00:00:00.000 Started by upstream project "autotest-per-patch" build number 132548 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.120 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.121 The recommended git tool is: git 00:00:00.121 using credential 00000000-0000-0000-0000-000000000002 00:00:00.122 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.169 Fetching changes from the remote Git repository 00:00:00.173 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.225 Using shallow fetch with depth 1 00:00:00.225 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.225 > git --version # timeout=10 00:00:00.267 > git --version # 'git version 2.39.2' 00:00:00.267 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.289 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.289 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.585 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.598 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.611 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.611 > git config core.sparsecheckout # timeout=10 00:00:05.625 > git read-tree -mu HEAD # timeout=10 00:00:05.673 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.696 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.696 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.780 [Pipeline] Start of Pipeline 00:00:05.794 [Pipeline] library 00:00:05.795 Loading library shm_lib@master 00:00:05.795 Library shm_lib@master is cached. Copying from home. 00:00:05.811 [Pipeline] node 00:00:05.819 Running on GP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.820 [Pipeline] { 00:00:05.828 [Pipeline] catchError 00:00:05.829 [Pipeline] { 00:00:05.839 [Pipeline] wrap 00:00:05.847 [Pipeline] { 00:00:05.854 [Pipeline] stage 00:00:05.856 [Pipeline] { (Prologue) 00:00:06.046 [Pipeline] sh 00:00:06.335 + logger -p user.info -t JENKINS-CI 00:00:06.351 [Pipeline] echo 00:00:06.352 Node: GP6 00:00:06.358 [Pipeline] sh 00:00:06.654 [Pipeline] setCustomBuildProperty 00:00:06.668 [Pipeline] echo 00:00:06.670 Cleanup processes 00:00:06.677 [Pipeline] sh 00:00:06.963 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.963 1458272 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.975 [Pipeline] sh 00:00:07.260 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.260 ++ grep -v 'sudo pgrep' 00:00:07.260 ++ awk '{print $1}' 00:00:07.260 + sudo kill -9 00:00:07.260 + true 00:00:07.272 [Pipeline] cleanWs 00:00:07.280 [WS-CLEANUP] Deleting project workspace... 00:00:07.280 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.287 [WS-CLEANUP] done 00:00:07.290 [Pipeline] setCustomBuildProperty 00:00:07.301 [Pipeline] sh 00:00:07.583 + sudo git config --global --replace-all safe.directory '*' 00:00:07.698 [Pipeline] httpRequest 00:00:08.031 [Pipeline] echo 00:00:08.033 Sorcerer 10.211.164.20 is alive 00:00:08.040 [Pipeline] retry 00:00:08.042 [Pipeline] { 00:00:08.057 [Pipeline] httpRequest 00:00:08.062 HttpMethod: GET 00:00:08.062 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.063 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.076 Response Code: HTTP/1.1 200 OK 00:00:08.076 Success: Status code 200 is in the accepted range: 200,404 00:00:08.076 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.382 [Pipeline] } 00:00:10.399 [Pipeline] // retry 00:00:10.406 [Pipeline] sh 00:00:10.694 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.710 [Pipeline] httpRequest 00:00:11.560 [Pipeline] echo 00:00:11.562 Sorcerer 10.211.164.20 is alive 00:00:11.572 [Pipeline] retry 00:00:11.574 [Pipeline] { 00:00:11.587 [Pipeline] httpRequest 00:00:11.591 HttpMethod: GET 00:00:11.591 URL: http://10.211.164.20/packages/spdk_752c08b51d4d945f9b4de294628364e4390596d1.tar.gz 00:00:11.592 Sending request to url: http://10.211.164.20/packages/spdk_752c08b51d4d945f9b4de294628364e4390596d1.tar.gz 00:00:11.613 Response Code: HTTP/1.1 200 OK 00:00:11.613 Success: Status code 200 is in the accepted range: 200,404 00:00:11.614 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_752c08b51d4d945f9b4de294628364e4390596d1.tar.gz 00:00:58.568 [Pipeline] } 00:00:58.581 [Pipeline] // retry 00:00:58.589 [Pipeline] sh 00:00:58.874 + tar --no-same-owner -xf spdk_752c08b51d4d945f9b4de294628364e4390596d1.tar.gz 00:01:01.417 [Pipeline] sh 00:01:01.701 + git -C spdk log --oneline -n5 00:01:01.701 752c08b51 bdev: Rename _bdev_memory_domain_io_get_buf() to bdev_io_get_bounce_buf() 00:01:01.701 22fe262e0 bdev: Relocate _bdev_memory_domain_io_get_buf_cb() close to _bdev_io_submit_ext() 00:01:01.701 3c6c4e019 bdev: Factor out checking bounce buffer necessity into helper function 00:01:01.701 0836dccda bdev: Add spdk_dif_ctx and spdk_dif_error into spdk_bdev_io 00:01:01.701 fb1630bf7 bdev: Use data_block_size for upper layer buffer if hide_metadata is true 00:01:01.712 [Pipeline] } 00:01:01.727 [Pipeline] // stage 00:01:01.736 [Pipeline] stage 00:01:01.738 [Pipeline] { (Prepare) 00:01:01.755 [Pipeline] writeFile 00:01:01.771 [Pipeline] sh 00:01:02.055 + logger -p user.info -t JENKINS-CI 00:01:02.068 [Pipeline] sh 00:01:02.351 + logger -p user.info -t JENKINS-CI 00:01:02.364 [Pipeline] sh 00:01:02.649 + cat autorun-spdk.conf 00:01:02.649 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:02.649 SPDK_TEST_NVMF=1 00:01:02.649 SPDK_TEST_NVME_CLI=1 00:01:02.649 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:02.649 SPDK_TEST_NVMF_NICS=e810 00:01:02.649 SPDK_TEST_VFIOUSER=1 00:01:02.649 SPDK_RUN_UBSAN=1 00:01:02.649 NET_TYPE=phy 00:01:02.656 RUN_NIGHTLY=0 00:01:02.661 [Pipeline] readFile 00:01:02.687 [Pipeline] withEnv 00:01:02.690 [Pipeline] { 00:01:02.702 [Pipeline] sh 00:01:02.990 + set -ex 00:01:02.990 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:02.990 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:02.990 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:02.990 ++ SPDK_TEST_NVMF=1 
00:01:02.990 ++ SPDK_TEST_NVME_CLI=1 00:01:02.990 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:02.990 ++ SPDK_TEST_NVMF_NICS=e810 00:01:02.990 ++ SPDK_TEST_VFIOUSER=1 00:01:02.990 ++ SPDK_RUN_UBSAN=1 00:01:02.990 ++ NET_TYPE=phy 00:01:02.990 ++ RUN_NIGHTLY=0 00:01:02.990 + case $SPDK_TEST_NVMF_NICS in 00:01:02.990 + DRIVERS=ice 00:01:02.990 + [[ tcp == \r\d\m\a ]] 00:01:02.990 + [[ -n ice ]] 00:01:02.990 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:02.990 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:02.990 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:02.990 rmmod: ERROR: Module irdma is not currently loaded 00:01:02.990 rmmod: ERROR: Module i40iw is not currently loaded 00:01:02.990 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:02.990 + true 00:01:02.990 + for D in $DRIVERS 00:01:02.990 + sudo modprobe ice 00:01:02.990 + exit 0 00:01:03.000 [Pipeline] } 00:01:03.015 [Pipeline] // withEnv 00:01:03.021 [Pipeline] } 00:01:03.034 [Pipeline] // stage 00:01:03.045 [Pipeline] catchError 00:01:03.046 [Pipeline] { 00:01:03.059 [Pipeline] timeout 00:01:03.060 Timeout set to expire in 1 hr 0 min 00:01:03.061 [Pipeline] { 00:01:03.076 [Pipeline] stage 00:01:03.079 [Pipeline] { (Tests) 00:01:03.093 [Pipeline] sh 00:01:03.378 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:03.378 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:03.378 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:03.378 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:03.378 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:03.378 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:03.378 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:03.378 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:03.378 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:03.378 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:03.378 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:03.378 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:03.378 + source /etc/os-release 00:01:03.378 ++ NAME='Fedora Linux' 00:01:03.378 ++ VERSION='39 (Cloud Edition)' 00:01:03.378 ++ ID=fedora 00:01:03.378 ++ VERSION_ID=39 00:01:03.378 ++ VERSION_CODENAME= 00:01:03.378 ++ PLATFORM_ID=platform:f39 00:01:03.378 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:03.378 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:03.378 ++ LOGO=fedora-logo-icon 00:01:03.378 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:03.378 ++ HOME_URL=https://fedoraproject.org/ 00:01:03.378 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:03.378 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:03.378 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:03.378 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:03.378 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:03.378 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:03.378 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:03.378 ++ SUPPORT_END=2024-11-12 00:01:03.378 ++ VARIANT='Cloud Edition' 00:01:03.378 ++ VARIANT_ID=cloud 00:01:03.378 + uname -a 00:01:03.378 Linux spdk-gp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:03.378 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:04.753 Hugepages 00:01:04.753 node hugesize free / total 00:01:04.754 node0 1048576kB 0 / 0 00:01:04.754 node0 2048kB 0 / 0 00:01:04.754 node1 1048576kB 0 / 0 00:01:04.754 node1 2048kB 0 / 0 00:01:04.754 00:01:04.754 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:04.754 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:04.754 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:04.754 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:04.754 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:04.754 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:04.754 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:04.754 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:04.754 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:04.754 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:04.754 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:04.754 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:04.754 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:04.754 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:04.754 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:04.754 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:04.754 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:04.754 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:04.754 + rm -f /tmp/spdk-ld-path 00:01:04.754 + source autorun-spdk.conf 00:01:04.754 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:04.754 ++ SPDK_TEST_NVMF=1 00:01:04.754 ++ SPDK_TEST_NVME_CLI=1 00:01:04.754 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:04.754 ++ SPDK_TEST_NVMF_NICS=e810 00:01:04.754 ++ SPDK_TEST_VFIOUSER=1 00:01:04.754 ++ SPDK_RUN_UBSAN=1 00:01:04.754 ++ NET_TYPE=phy 00:01:04.754 ++ RUN_NIGHTLY=0 00:01:04.754 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:04.754 + [[ -n '' ]] 00:01:04.754 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:04.754 + for M in /var/spdk/build-*-manifest.txt 00:01:04.754 + [[ -f 
/var/spdk/build-kernel-manifest.txt ]] 00:01:04.754 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:04.754 + for M in /var/spdk/build-*-manifest.txt 00:01:04.754 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:04.754 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:04.754 + for M in /var/spdk/build-*-manifest.txt 00:01:04.754 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:04.754 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:04.754 ++ uname 00:01:04.754 + [[ Linux == \L\i\n\u\x ]] 00:01:04.754 + sudo dmesg -T 00:01:04.754 + sudo dmesg --clear 00:01:04.754 + dmesg_pid=1458947 00:01:04.754 + [[ Fedora Linux == FreeBSD ]] 00:01:04.754 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:04.754 + sudo dmesg -Tw 00:01:04.754 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:04.754 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:04.754 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:04.754 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:04.754 + [[ -x /usr/src/fio-static/fio ]] 00:01:04.754 + export FIO_BIN=/usr/src/fio-static/fio 00:01:04.754 + FIO_BIN=/usr/src/fio-static/fio 00:01:04.754 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:04.754 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:04.754 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:04.754 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:04.754 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:04.754 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:04.754 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:04.754 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:04.754 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:04.754 20:31:08 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:04.754 20:31:08 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:04.754 20:31:08 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:04.754 20:31:08 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:04.754 20:31:08 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:04.754 20:31:08 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:04.754 20:31:08 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:04.754 20:31:08 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:04.754 20:31:08 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:04.754 20:31:08 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:04.754 20:31:08 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:04.754 20:31:08 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:04.754 20:31:08 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:04.754 20:31:08 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:04.754 20:31:08 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:04.754 20:31:08 -- 
scripts/common.sh@15 -- $ shopt -s extglob 00:01:04.754 20:31:08 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:04.754 20:31:08 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:04.754 20:31:08 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:04.754 20:31:08 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:04.754 20:31:08 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:04.754 20:31:08 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:04.754 20:31:08 -- paths/export.sh@5 -- $ export PATH 00:01:04.754 20:31:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:04.754 20:31:08 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:04.754 20:31:08 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:04.754 20:31:08 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732649468.XXXXXX 00:01:04.754 20:31:08 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732649468.5wiXlI 00:01:04.754 20:31:08 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:04.754 20:31:08 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:04.754 20:31:08 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:04.754 20:31:08 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:04.754 20:31:08 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:04.754 20:31:08 -- 
common/autobuild_common.sh@509 -- $ get_config_params 00:01:04.754 20:31:08 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:04.754 20:31:08 -- common/autotest_common.sh@10 -- $ set +x 00:01:04.754 20:31:08 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:04.754 20:31:08 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:04.754 20:31:08 -- pm/common@17 -- $ local monitor 00:01:04.754 20:31:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:04.754 20:31:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:04.754 20:31:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:04.754 20:31:08 -- pm/common@21 -- $ date +%s 00:01:04.754 20:31:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:04.754 20:31:08 -- pm/common@21 -- $ date +%s 00:01:04.754 20:31:08 -- pm/common@25 -- $ sleep 1 00:01:04.754 20:31:08 -- pm/common@21 -- $ date +%s 00:01:04.754 20:31:08 -- pm/common@21 -- $ date +%s 00:01:04.754 20:31:08 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732649468 00:01:04.754 20:31:08 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732649468 00:01:04.754 20:31:08 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732649468 00:01:04.754 20:31:08 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732649468 00:01:04.754 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732649468_collect-cpu-load.pm.log 00:01:04.754 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732649468_collect-cpu-temp.pm.log 00:01:04.755 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732649468_collect-vmstat.pm.log 00:01:04.755 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732649468_collect-bmc-pm.bmc.pm.log 00:01:05.699 20:31:09 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:05.699 20:31:09 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:05.699 20:31:09 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:05.699 20:31:09 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:05.699 20:31:09 -- spdk/autobuild.sh@16 -- $ date -u 00:01:05.699 Tue Nov 26 07:31:09 PM UTC 2024 00:01:05.699 20:31:09 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:05.699 v25.01-pre-248-g752c08b51 00:01:05.699 20:31:09 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:05.699 20:31:09 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:05.699 20:31:09 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:05.699 20:31:09 -- common/autotest_common.sh@1105 -- $ 
'[' 3 -le 1 ']' 00:01:05.699 20:31:09 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:05.699 20:31:09 -- common/autotest_common.sh@10 -- $ set +x 00:01:05.956 ************************************ 00:01:05.956 START TEST ubsan 00:01:05.956 ************************************ 00:01:05.956 20:31:09 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:05.956 using ubsan 00:01:05.956 00:01:05.956 real 0m0.000s 00:01:05.956 user 0m0.000s 00:01:05.956 sys 0m0.000s 00:01:05.956 20:31:09 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:05.956 20:31:09 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:05.956 ************************************ 00:01:05.956 END TEST ubsan 00:01:05.956 ************************************ 00:01:05.957 20:31:09 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:05.957 20:31:09 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:05.957 20:31:09 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:05.957 20:31:09 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:05.957 20:31:09 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:05.957 20:31:09 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:05.957 20:31:09 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:05.957 20:31:09 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:05.957 20:31:09 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:05.957 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:05.957 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:06.214 Using 'verbs' RDMA provider 00:01:16.771 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:26.758 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:26.758 Creating mk/config.mk...done. 00:01:26.758 Creating mk/cc.flags.mk...done. 00:01:26.758 Type 'make' to build. 00:01:26.758 20:31:30 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:01:26.758 20:31:30 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:26.758 20:31:30 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:26.758 20:31:30 -- common/autotest_common.sh@10 -- $ set +x 00:01:27.016 ************************************ 00:01:27.016 START TEST make 00:01:27.016 ************************************ 00:01:27.016 20:31:30 make -- common/autotest_common.sh@1129 -- $ make -j48 00:01:27.016 make[1]: Nothing to be done for 'all'. 
00:01:28.929 The Meson build system 00:01:28.929 Version: 1.5.0 00:01:28.929 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:28.929 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:28.929 Build type: native build 00:01:28.929 Project name: libvfio-user 00:01:28.929 Project version: 0.0.1 00:01:28.929 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:28.929 C linker for the host machine: cc ld.bfd 2.40-14 00:01:28.929 Host machine cpu family: x86_64 00:01:28.929 Host machine cpu: x86_64 00:01:28.929 Run-time dependency threads found: YES 00:01:28.929 Library dl found: YES 00:01:28.929 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:28.929 Run-time dependency json-c found: YES 0.17 00:01:28.929 Run-time dependency cmocka found: YES 1.1.7 00:01:28.929 Program pytest-3 found: NO 00:01:28.929 Program flake8 found: NO 00:01:28.929 Program misspell-fixer found: NO 00:01:28.929 Program restructuredtext-lint found: NO 00:01:28.929 Program valgrind found: YES (/usr/bin/valgrind) 00:01:28.929 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:28.929 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:28.929 Compiler for C supports arguments -Wwrite-strings: YES 00:01:28.929 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:28.929 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:28.930 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:28.930 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:28.930 Build targets in project: 8 00:01:28.930 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:28.930 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:28.930 00:01:28.930 libvfio-user 0.0.1 00:01:28.930 00:01:28.930 User defined options 00:01:28.930 buildtype : debug 00:01:28.930 default_library: shared 00:01:28.930 libdir : /usr/local/lib 00:01:28.930 00:01:28.930 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:29.876 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:29.876 [1/37] Compiling C object samples/null.p/null.c.o 00:01:30.140 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:30.140 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:30.140 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:30.140 [5/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:30.140 [6/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:30.140 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:30.140 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:30.140 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:30.140 [10/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:30.140 [11/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:30.140 [12/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:30.140 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:30.140 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:30.140 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:30.140 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:30.140 [17/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:30.140 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:30.140 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:30.140 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:30.140 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:30.140 [22/37] Compiling C object samples/server.p/server.c.o 00:01:30.140 [23/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:30.140 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:30.140 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:30.399 [26/37] Compiling C object samples/client.p/client.c.o 00:01:30.399 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:30.399 [28/37] Linking target samples/client 00:01:30.399 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:30.399 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:30.399 [31/37] Linking target test/unit_tests 00:01:30.661 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:30.661 [33/37] Linking target samples/null 00:01:30.661 [34/37] Linking target samples/gpio-pci-idio-16 00:01:30.661 [35/37] Linking target samples/lspci 00:01:30.661 [36/37] Linking target samples/server 00:01:30.661 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:30.661 INFO: autodetecting backend as ninja 00:01:30.661 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:30.661 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:31.603 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:31.603 ninja: no work to do. 00:01:36.870 The Meson build system 00:01:36.870 Version: 1.5.0 00:01:36.870 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:36.870 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:36.870 Build type: native build 00:01:36.870 Program cat found: YES (/usr/bin/cat) 00:01:36.870 Project name: DPDK 00:01:36.870 Project version: 24.03.0 00:01:36.870 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:36.870 C linker for the host machine: cc ld.bfd 2.40-14 00:01:36.870 Host machine cpu family: x86_64 00:01:36.870 Host machine cpu: x86_64 00:01:36.870 Message: ## Building in Developer Mode ## 00:01:36.870 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:36.870 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:36.870 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:36.870 Program python3 found: YES (/usr/bin/python3) 00:01:36.870 Program cat found: YES (/usr/bin/cat) 00:01:36.870 Compiler for C supports arguments -march=native: YES 00:01:36.870 Checking for size of "void *" : 8 00:01:36.870 Checking for size of "void *" : 8 (cached) 00:01:36.870 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:36.870 Library m found: YES 00:01:36.870 Library numa found: YES 00:01:36.870 Has header "numaif.h" : YES 00:01:36.870 Library fdt found: NO 00:01:36.870 Library execinfo found: NO 00:01:36.870 Has header "execinfo.h" : YES 00:01:36.870 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:36.870 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:36.870 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:36.870 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:36.870 Run-time dependency openssl found: YES 3.1.1 00:01:36.870 Run-time dependency libpcap found: YES 1.10.4 00:01:36.870 Has header "pcap.h" with dependency libpcap: YES 00:01:36.870 Compiler for C supports arguments -Wcast-qual: YES 00:01:36.870 Compiler for C supports arguments -Wdeprecated: YES 00:01:36.870 Compiler for C supports arguments -Wformat: YES 00:01:36.870 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:36.870 Compiler for C supports arguments -Wformat-security: NO 00:01:36.870 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:36.870 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:36.870 Compiler for C supports arguments -Wnested-externs: YES 00:01:36.870 Compiler for C supports arguments -Wold-style-definition: YES 00:01:36.871 Compiler for C supports arguments -Wpointer-arith: YES 00:01:36.871 Compiler for C supports arguments -Wsign-compare: YES 00:01:36.871 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:36.871 Compiler for C supports arguments -Wundef: YES 00:01:36.871 Compiler for C supports arguments -Wwrite-strings: YES 00:01:36.871 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:36.871 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:01:36.871 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:36.871 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:36.871 Program objdump found: YES (/usr/bin/objdump) 00:01:36.871 Compiler for C supports arguments -mavx512f: YES 00:01:36.871 Checking if "AVX512 checking" compiles: YES 00:01:36.871 Fetching value of define "__SSE4_2__" : 1 00:01:36.871 Fetching value of define "__AES__" : 1 00:01:36.871 Fetching value of define "__AVX__" : 1 00:01:36.871 Fetching value of define "__AVX2__" : (undefined) 00:01:36.871 Fetching value of define "__AVX512BW__" : (undefined) 00:01:36.871 Fetching value of define "__AVX512CD__" : (undefined) 00:01:36.871 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:36.871 Fetching value of define "__AVX512F__" : (undefined) 00:01:36.871 Fetching value of define "__AVX512VL__" : (undefined) 00:01:36.871 Fetching value of define "__PCLMUL__" : 1 00:01:36.871 Fetching value of define "__RDRND__" : 1 00:01:36.871 Fetching value of define "__RDSEED__" : (undefined) 00:01:36.871 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:36.871 Fetching value of define "__znver1__" : (undefined) 00:01:36.871 Fetching value of define "__znver2__" : (undefined) 00:01:36.871 Fetching value of define "__znver3__" : (undefined) 00:01:36.871 Fetching value of define "__znver4__" : (undefined) 00:01:36.871 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:36.871 Message: lib/log: Defining dependency "log" 00:01:36.871 Message: lib/kvargs: Defining dependency "kvargs" 00:01:36.871 Message: lib/telemetry: Defining dependency "telemetry" 00:01:36.871 Checking for function "getentropy" : NO 00:01:36.871 Message: lib/eal: Defining dependency "eal" 00:01:36.871 Message: lib/ring: Defining dependency "ring" 00:01:36.871 Message: lib/rcu: Defining dependency "rcu" 00:01:36.871 Message: lib/mempool: Defining dependency "mempool" 00:01:36.871 Message: lib/mbuf: Defining dependency "mbuf" 00:01:36.871 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:36.871 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:36.871 Compiler for C supports arguments -mpclmul: YES 00:01:36.871 Compiler for C supports arguments -maes: YES 00:01:36.871 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:36.871 Compiler for C supports arguments -mavx512bw: YES 00:01:36.871 Compiler for C supports arguments -mavx512dq: YES 00:01:36.871 Compiler for C supports arguments -mavx512vl: YES 00:01:36.871 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:36.871 Compiler for C supports arguments -mavx2: YES 00:01:36.871 Compiler for C supports arguments -mavx: YES 00:01:36.871 Message: lib/net: Defining dependency "net" 00:01:36.871 Message: lib/meter: Defining dependency "meter" 00:01:36.871 Message: lib/ethdev: Defining dependency "ethdev" 00:01:36.871 Message: lib/pci: Defining dependency "pci" 00:01:36.871 Message: lib/cmdline: Defining dependency "cmdline" 00:01:36.871 Message: lib/hash: Defining dependency "hash" 00:01:36.871 Message: lib/timer: Defining dependency "timer" 00:01:36.871 Message: lib/compressdev: Defining dependency "compressdev" 00:01:36.871 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:36.871 Message: lib/dmadev: Defining dependency "dmadev" 00:01:36.871 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:36.871 Message: lib/power: Defining dependency "power" 00:01:36.871 Message: lib/reorder: Defining dependency 
"reorder" 00:01:36.871 Message: lib/security: Defining dependency "security" 00:01:36.871 Has header "linux/userfaultfd.h" : YES 00:01:36.871 Has header "linux/vduse.h" : YES 00:01:36.871 Message: lib/vhost: Defining dependency "vhost" 00:01:36.871 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:36.871 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:36.871 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:36.871 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:36.871 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:36.871 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:36.871 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:36.871 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:36.871 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:36.871 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:36.871 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:36.871 Configuring doxy-api-html.conf using configuration 00:01:36.871 Configuring doxy-api-man.conf using configuration 00:01:36.871 Program mandb found: YES (/usr/bin/mandb) 00:01:36.871 Program sphinx-build found: NO 00:01:36.871 Configuring rte_build_config.h using configuration 00:01:36.871 Message: 00:01:36.871 ================= 00:01:36.871 Applications Enabled 00:01:36.871 ================= 00:01:36.871 00:01:36.871 apps: 00:01:36.871 00:01:36.871 00:01:36.871 Message: 00:01:36.871 ================= 00:01:36.871 Libraries Enabled 00:01:36.871 ================= 00:01:36.871 00:01:36.871 libs: 00:01:36.871 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:36.871 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:36.871 cryptodev, dmadev, power, reorder, security, vhost, 00:01:36.871 00:01:36.871 Message: 00:01:36.871 =============== 00:01:36.871 Drivers Enabled 00:01:36.871 =============== 00:01:36.871 00:01:36.871 common: 00:01:36.871 00:01:36.871 bus: 00:01:36.871 pci, vdev, 00:01:36.871 mempool: 00:01:36.871 ring, 00:01:36.871 dma: 00:01:36.871 00:01:36.871 net: 00:01:36.871 00:01:36.871 crypto: 00:01:36.871 00:01:36.871 compress: 00:01:36.871 00:01:36.871 vdpa: 00:01:36.871 00:01:36.871 00:01:36.871 Message: 00:01:36.871 ================= 00:01:36.871 Content Skipped 00:01:36.871 ================= 00:01:36.871 00:01:36.871 apps: 00:01:36.871 dumpcap: explicitly disabled via build config 00:01:36.871 graph: explicitly disabled via build config 00:01:36.871 pdump: explicitly disabled via build config 00:01:36.871 proc-info: explicitly disabled via build config 00:01:36.871 test-acl: explicitly disabled via build config 00:01:36.871 test-bbdev: explicitly disabled via build config 00:01:36.871 test-cmdline: explicitly disabled via build config 00:01:36.871 test-compress-perf: explicitly disabled via build config 00:01:36.871 test-crypto-perf: explicitly disabled via build config 00:01:36.871 test-dma-perf: explicitly disabled via build config 00:01:36.871 test-eventdev: explicitly disabled via build config 00:01:36.871 test-fib: explicitly disabled via build config 00:01:36.871 test-flow-perf: explicitly disabled via build config 00:01:36.871 test-gpudev: explicitly disabled via build config 00:01:36.871 test-mldev: explicitly disabled via build config 00:01:36.871 test-pipeline: explicitly disabled via build config 00:01:36.871 test-pmd: explicitly 
disabled via build config 00:01:36.871 test-regex: explicitly disabled via build config 00:01:36.871 test-sad: explicitly disabled via build config 00:01:36.871 test-security-perf: explicitly disabled via build config 00:01:36.871 00:01:36.871 libs: 00:01:36.871 argparse: explicitly disabled via build config 00:01:36.871 metrics: explicitly disabled via build config 00:01:36.871 acl: explicitly disabled via build config 00:01:36.871 bbdev: explicitly disabled via build config 00:01:36.871 bitratestats: explicitly disabled via build config 00:01:36.871 bpf: explicitly disabled via build config 00:01:36.871 cfgfile: explicitly disabled via build config 00:01:36.871 distributor: explicitly disabled via build config 00:01:36.871 efd: explicitly disabled via build config 00:01:36.871 eventdev: explicitly disabled via build config 00:01:36.871 dispatcher: explicitly disabled via build config 00:01:36.871 gpudev: explicitly disabled via build config 00:01:36.871 gro: explicitly disabled via build config 00:01:36.871 gso: explicitly disabled via build config 00:01:36.871 ip_frag: explicitly disabled via build config 00:01:36.871 jobstats: explicitly disabled via build config 00:01:36.871 latencystats: explicitly disabled via build config 00:01:36.871 lpm: explicitly disabled via build config 00:01:36.871 member: explicitly disabled via build config 00:01:36.871 pcapng: explicitly disabled via build config 00:01:36.871 rawdev: explicitly disabled via build config 00:01:36.871 regexdev: explicitly disabled via build config 00:01:36.871 mldev: explicitly disabled via build config 00:01:36.871 rib: explicitly disabled via build config 00:01:36.871 sched: explicitly disabled via build config 00:01:36.871 stack: explicitly disabled via build config 00:01:36.871 ipsec: explicitly disabled via build config 00:01:36.871 pdcp: explicitly disabled via build config 00:01:36.871 fib: explicitly disabled via build config 00:01:36.871 port: explicitly disabled via build config 00:01:36.871 pdump: explicitly disabled via build config 00:01:36.871 table: explicitly disabled via build config 00:01:36.871 pipeline: explicitly disabled via build config 00:01:36.871 graph: explicitly disabled via build config 00:01:36.871 node: explicitly disabled via build config 00:01:36.871 00:01:36.871 drivers: 00:01:36.871 common/cpt: not in enabled drivers build config 00:01:36.871 common/dpaax: not in enabled drivers build config 00:01:36.871 common/iavf: not in enabled drivers build config 00:01:36.871 common/idpf: not in enabled drivers build config 00:01:36.871 common/ionic: not in enabled drivers build config 00:01:36.871 common/mvep: not in enabled drivers build config 00:01:36.871 common/octeontx: not in enabled drivers build config 00:01:36.871 bus/auxiliary: not in enabled drivers build config 00:01:36.871 bus/cdx: not in enabled drivers build config 00:01:36.871 bus/dpaa: not in enabled drivers build config 00:01:36.871 bus/fslmc: not in enabled drivers build config 00:01:36.871 bus/ifpga: not in enabled drivers build config 00:01:36.871 bus/platform: not in enabled drivers build config 00:01:36.871 bus/uacce: not in enabled drivers build config 00:01:36.872 bus/vmbus: not in enabled drivers build config 00:01:36.872 common/cnxk: not in enabled drivers build config 00:01:36.872 common/mlx5: not in enabled drivers build config 00:01:36.872 common/nfp: not in enabled drivers build config 00:01:36.872 common/nitrox: not in enabled drivers build config 00:01:36.872 common/qat: not in enabled drivers build config 
00:01:36.872 common/sfc_efx: not in enabled drivers build config 00:01:36.872 mempool/bucket: not in enabled drivers build config 00:01:36.872 mempool/cnxk: not in enabled drivers build config 00:01:36.872 mempool/dpaa: not in enabled drivers build config 00:01:36.872 mempool/dpaa2: not in enabled drivers build config 00:01:36.872 mempool/octeontx: not in enabled drivers build config 00:01:36.872 mempool/stack: not in enabled drivers build config 00:01:36.872 dma/cnxk: not in enabled drivers build config 00:01:36.872 dma/dpaa: not in enabled drivers build config 00:01:36.872 dma/dpaa2: not in enabled drivers build config 00:01:36.872 dma/hisilicon: not in enabled drivers build config 00:01:36.872 dma/idxd: not in enabled drivers build config 00:01:36.872 dma/ioat: not in enabled drivers build config 00:01:36.872 dma/skeleton: not in enabled drivers build config 00:01:36.872 net/af_packet: not in enabled drivers build config 00:01:36.872 net/af_xdp: not in enabled drivers build config 00:01:36.872 net/ark: not in enabled drivers build config 00:01:36.872 net/atlantic: not in enabled drivers build config 00:01:36.872 net/avp: not in enabled drivers build config 00:01:36.872 net/axgbe: not in enabled drivers build config 00:01:36.872 net/bnx2x: not in enabled drivers build config 00:01:36.872 net/bnxt: not in enabled drivers build config 00:01:36.872 net/bonding: not in enabled drivers build config 00:01:36.872 net/cnxk: not in enabled drivers build config 00:01:36.872 net/cpfl: not in enabled drivers build config 00:01:36.872 net/cxgbe: not in enabled drivers build config 00:01:36.872 net/dpaa: not in enabled drivers build config 00:01:36.872 net/dpaa2: not in enabled drivers build config 00:01:36.872 net/e1000: not in enabled drivers build config 00:01:36.872 net/ena: not in enabled drivers build config 00:01:36.872 net/enetc: not in enabled drivers build config 00:01:36.872 net/enetfec: not in enabled drivers build config 00:01:36.872 net/enic: not in enabled drivers build config 00:01:36.872 net/failsafe: not in enabled drivers build config 00:01:36.872 net/fm10k: not in enabled drivers build config 00:01:36.872 net/gve: not in enabled drivers build config 00:01:36.872 net/hinic: not in enabled drivers build config 00:01:36.872 net/hns3: not in enabled drivers build config 00:01:36.872 net/i40e: not in enabled drivers build config 00:01:36.872 net/iavf: not in enabled drivers build config 00:01:36.872 net/ice: not in enabled drivers build config 00:01:36.872 net/idpf: not in enabled drivers build config 00:01:36.872 net/igc: not in enabled drivers build config 00:01:36.872 net/ionic: not in enabled drivers build config 00:01:36.872 net/ipn3ke: not in enabled drivers build config 00:01:36.872 net/ixgbe: not in enabled drivers build config 00:01:36.872 net/mana: not in enabled drivers build config 00:01:36.872 net/memif: not in enabled drivers build config 00:01:36.872 net/mlx4: not in enabled drivers build config 00:01:36.872 net/mlx5: not in enabled drivers build config 00:01:36.872 net/mvneta: not in enabled drivers build config 00:01:36.872 net/mvpp2: not in enabled drivers build config 00:01:36.872 net/netvsc: not in enabled drivers build config 00:01:36.872 net/nfb: not in enabled drivers build config 00:01:36.872 net/nfp: not in enabled drivers build config 00:01:36.872 net/ngbe: not in enabled drivers build config 00:01:36.872 net/null: not in enabled drivers build config 00:01:36.872 net/octeontx: not in enabled drivers build config 00:01:36.872 net/octeon_ep: not in enabled 
drivers build config 00:01:36.872 net/pcap: not in enabled drivers build config 00:01:36.872 net/pfe: not in enabled drivers build config 00:01:36.872 net/qede: not in enabled drivers build config 00:01:36.872 net/ring: not in enabled drivers build config 00:01:36.872 net/sfc: not in enabled drivers build config 00:01:36.872 net/softnic: not in enabled drivers build config 00:01:36.872 net/tap: not in enabled drivers build config 00:01:36.872 net/thunderx: not in enabled drivers build config 00:01:36.872 net/txgbe: not in enabled drivers build config 00:01:36.872 net/vdev_netvsc: not in enabled drivers build config 00:01:36.872 net/vhost: not in enabled drivers build config 00:01:36.872 net/virtio: not in enabled drivers build config 00:01:36.872 net/vmxnet3: not in enabled drivers build config 00:01:36.872 raw/*: missing internal dependency, "rawdev" 00:01:36.872 crypto/armv8: not in enabled drivers build config 00:01:36.872 crypto/bcmfs: not in enabled drivers build config 00:01:36.872 crypto/caam_jr: not in enabled drivers build config 00:01:36.872 crypto/ccp: not in enabled drivers build config 00:01:36.872 crypto/cnxk: not in enabled drivers build config 00:01:36.872 crypto/dpaa_sec: not in enabled drivers build config 00:01:36.872 crypto/dpaa2_sec: not in enabled drivers build config 00:01:36.872 crypto/ipsec_mb: not in enabled drivers build config 00:01:36.872 crypto/mlx5: not in enabled drivers build config 00:01:36.872 crypto/mvsam: not in enabled drivers build config 00:01:36.872 crypto/nitrox: not in enabled drivers build config 00:01:36.872 crypto/null: not in enabled drivers build config 00:01:36.872 crypto/octeontx: not in enabled drivers build config 00:01:36.872 crypto/openssl: not in enabled drivers build config 00:01:36.872 crypto/scheduler: not in enabled drivers build config 00:01:36.872 crypto/uadk: not in enabled drivers build config 00:01:36.872 crypto/virtio: not in enabled drivers build config 00:01:36.872 compress/isal: not in enabled drivers build config 00:01:36.872 compress/mlx5: not in enabled drivers build config 00:01:36.872 compress/nitrox: not in enabled drivers build config 00:01:36.872 compress/octeontx: not in enabled drivers build config 00:01:36.872 compress/zlib: not in enabled drivers build config 00:01:36.872 regex/*: missing internal dependency, "regexdev" 00:01:36.872 ml/*: missing internal dependency, "mldev" 00:01:36.872 vdpa/ifc: not in enabled drivers build config 00:01:36.872 vdpa/mlx5: not in enabled drivers build config 00:01:36.872 vdpa/nfp: not in enabled drivers build config 00:01:36.872 vdpa/sfc: not in enabled drivers build config 00:01:36.872 event/*: missing internal dependency, "eventdev" 00:01:36.872 baseband/*: missing internal dependency, "bbdev" 00:01:36.872 gpu/*: missing internal dependency, "gpudev" 00:01:36.872 00:01:36.872 00:01:36.872 Build targets in project: 85 00:01:36.872 00:01:36.872 DPDK 24.03.0 00:01:36.872 00:01:36.872 User defined options 00:01:36.872 buildtype : debug 00:01:36.872 default_library : shared 00:01:36.872 libdir : lib 00:01:36.872 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:36.872 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:36.872 c_link_args : 00:01:36.872 cpu_instruction_set: native 00:01:36.872 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:36.872 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:36.872 enable_docs : false 00:01:36.872 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:01:36.872 enable_kmods : false 00:01:36.872 max_lcores : 128 00:01:36.872 tests : false 00:01:36.872 00:01:36.872 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:36.872 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:36.872 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:37.132 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:37.132 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:37.132 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:37.132 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:37.132 [6/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:37.132 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:37.132 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:37.132 [9/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:37.132 [10/268] Linking static target lib/librte_kvargs.a 00:01:37.132 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:37.132 [12/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:37.132 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:37.132 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:37.132 [15/268] Linking static target lib/librte_log.a 00:01:37.132 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:37.704 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.971 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:37.971 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:37.971 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:37.971 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:37.971 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:37.971 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:37.971 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:37.971 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:37.971 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:37.971 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:37.971 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:37.971 [29/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 
00:01:37.971 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:37.971 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:37.971 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:37.971 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:37.971 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:37.971 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:37.971 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:37.971 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:37.971 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:37.971 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:37.971 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:37.971 [41/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:37.971 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:37.971 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:37.971 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:37.971 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:37.971 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:37.971 [47/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:37.971 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:37.971 [49/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:37.971 [50/268] Linking static target lib/librte_telemetry.a 00:01:37.971 [51/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:37.971 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:37.971 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:38.229 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:38.229 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:38.229 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:38.229 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:38.229 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:38.229 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:38.229 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:38.229 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:38.229 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:38.229 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:38.229 [64/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.229 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:38.489 [66/268] Linking target lib/librte_log.so.24.1 00:01:38.489 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:38.753 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:38.753 [69/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:38.753 
[70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:38.753 [71/268] Linking static target lib/librte_pci.a 00:01:38.753 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:38.753 [73/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:38.753 [74/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:38.753 [75/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:38.753 [76/268] Linking target lib/librte_kvargs.so.24.1 00:01:38.753 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:38.753 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:38.753 [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:38.753 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:39.012 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:39.012 [82/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:39.012 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:39.012 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:39.012 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:39.012 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:39.012 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:39.012 [88/268] Linking static target lib/librte_ring.a 00:01:39.012 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:39.012 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:39.012 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:39.012 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:39.012 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:39.012 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:39.012 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:39.012 [96/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:39.012 [97/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:39.012 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:39.012 [99/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:39.012 [100/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:39.012 [101/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:39.012 [102/268] Linking static target lib/librte_meter.a 00:01:39.012 [103/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:39.012 [104/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:39.012 [105/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:39.012 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:39.012 [107/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:39.012 [108/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:39.012 [109/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.012 [110/268] Linking static target lib/librte_eal.a 00:01:39.273 [111/268] 
Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.273 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:39.273 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:39.273 [114/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:39.273 [115/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:39.273 [116/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:39.273 [117/268] Linking static target lib/librte_mempool.a 00:01:39.273 [118/268] Linking static target lib/librte_rcu.a 00:01:39.273 [119/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:39.273 [120/268] Linking target lib/librte_telemetry.so.24.1 00:01:39.273 [121/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:39.274 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:39.274 [123/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:39.274 [124/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:39.274 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:39.274 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:39.536 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:39.536 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:39.536 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:39.536 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:39.536 [131/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:39.536 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:39.536 [133/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:39.536 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:39.536 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:39.536 [136/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.536 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:39.802 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:39.802 [139/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:39.802 [140/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:39.802 [141/268] Linking static target lib/librte_net.a 00:01:39.802 [142/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.802 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:39.802 [144/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.802 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:39.802 [146/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:39.802 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:40.062 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:40.063 [149/268] Linking static target lib/librte_cmdline.a 00:01:40.063 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:40.063 [151/268] Compiling C object 
lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:40.063 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:40.063 [153/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:40.063 [154/268] Linking static target lib/librte_timer.a 00:01:40.063 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:40.063 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:40.063 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:40.320 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:40.320 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:40.320 [160/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.320 [161/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:40.320 [162/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:40.320 [163/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:40.320 [164/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:40.320 [165/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:40.320 [166/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:40.320 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:40.320 [168/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:40.320 [169/268] Linking static target lib/librte_dmadev.a 00:01:40.320 [170/268] Linking static target lib/librte_power.a 00:01:40.320 [171/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.320 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:40.578 [173/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.578 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:40.578 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:40.578 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:40.578 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:40.578 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:40.579 [179/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:40.579 [180/268] Linking static target lib/librte_compressdev.a 00:01:40.579 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:40.579 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:40.579 [183/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:40.579 [184/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:40.579 [185/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:40.579 [186/268] Linking static target lib/librte_hash.a 00:01:40.579 [187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:40.837 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:40.837 [189/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:40.837 [190/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.837 
[191/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.837 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:40.837 [193/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:40.837 [194/268] Linking static target lib/librte_mbuf.a 00:01:40.837 [195/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:40.837 [196/268] Linking static target lib/librte_reorder.a 00:01:40.837 [197/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:40.837 [198/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:40.837 [199/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:40.837 [200/268] Linking static target drivers/librte_bus_vdev.a 00:01:40.837 [201/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.837 [202/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:40.837 [203/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:41.095 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:41.095 [205/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:41.095 [206/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:41.095 [207/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:41.095 [208/268] Linking static target lib/librte_security.a 00:01:41.095 [209/268] Linking static target drivers/librte_bus_pci.a 00:01:41.095 [210/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.095 [211/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:41.095 [212/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.095 [213/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.095 [214/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:41.095 [215/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:41.095 [216/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:41.095 [217/268] Linking static target drivers/librte_mempool_ring.a 00:01:41.095 [218/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.354 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:41.354 [220/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.354 [221/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:41.354 [222/268] Linking static target lib/librte_cryptodev.a 00:01:41.354 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.354 [224/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.610 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:41.610 [226/268] Linking static target lib/librte_ethdev.a 00:01:42.542 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.915 [228/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:45.813 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.813 [230/268] Linking target lib/librte_eal.so.24.1 00:01:45.813 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:45.814 [232/268] Linking target lib/librte_ring.so.24.1 00:01:45.814 [233/268] Linking target lib/librte_timer.so.24.1 00:01:45.814 [234/268] Linking target lib/librte_meter.so.24.1 00:01:45.814 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:45.814 [236/268] Linking target lib/librte_pci.so.24.1 00:01:45.814 [237/268] Linking target lib/librte_dmadev.so.24.1 00:01:45.814 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:45.814 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:45.814 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:45.814 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:45.814 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:45.814 [243/268] Linking target lib/librte_rcu.so.24.1 00:01:45.814 [244/268] Linking target lib/librte_mempool.so.24.1 00:01:45.814 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:45.814 [246/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.071 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:46.071 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:46.071 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:46.071 [250/268] Linking target lib/librte_mbuf.so.24.1 00:01:46.071 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:46.071 [252/268] Linking target lib/librte_reorder.so.24.1 00:01:46.071 [253/268] Linking target lib/librte_compressdev.so.24.1 00:01:46.071 [254/268] Linking target lib/librte_net.so.24.1 00:01:46.328 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:46.329 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:46.329 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:46.329 [258/268] Linking target lib/librte_security.so.24.1 00:01:46.329 [259/268] Linking target lib/librte_cmdline.so.24.1 00:01:46.329 [260/268] Linking target lib/librte_hash.so.24.1 00:01:46.329 [261/268] Linking target lib/librte_ethdev.so.24.1 00:01:46.586 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:46.586 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:46.586 [264/268] Linking target lib/librte_power.so.24.1 00:01:50.769 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:50.769 [266/268] Linking static target lib/librte_vhost.a 00:01:51.027 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.027 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:51.287 INFO: autodetecting backend as ninja 00:01:51.287 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:13.262 CC lib/log/log.o 00:02:13.262 CC lib/log/log_flags.o 00:02:13.262 CC 
lib/log/log_deprecated.o 00:02:13.262 CC lib/ut/ut.o 00:02:13.262 CC lib/ut_mock/mock.o 00:02:13.262 LIB libspdk_ut.a 00:02:13.262 LIB libspdk_log.a 00:02:13.262 LIB libspdk_ut_mock.a 00:02:13.262 SO libspdk_ut_mock.so.6.0 00:02:13.262 SO libspdk_ut.so.2.0 00:02:13.262 SO libspdk_log.so.7.1 00:02:13.262 SYMLINK libspdk_ut.so 00:02:13.262 SYMLINK libspdk_ut_mock.so 00:02:13.262 SYMLINK libspdk_log.so 00:02:13.262 CC lib/dma/dma.o 00:02:13.262 CXX lib/trace_parser/trace.o 00:02:13.262 CC lib/util/base64.o 00:02:13.262 CC lib/util/bit_array.o 00:02:13.262 CC lib/util/cpuset.o 00:02:13.262 CC lib/util/crc16.o 00:02:13.262 CC lib/util/crc32.o 00:02:13.262 CC lib/util/crc32c.o 00:02:13.262 CC lib/ioat/ioat.o 00:02:13.262 CC lib/util/crc32_ieee.o 00:02:13.262 CC lib/util/crc64.o 00:02:13.262 CC lib/util/dif.o 00:02:13.262 CC lib/util/fd.o 00:02:13.262 CC lib/util/fd_group.o 00:02:13.262 CC lib/util/file.o 00:02:13.262 CC lib/util/hexlify.o 00:02:13.262 CC lib/util/iov.o 00:02:13.262 CC lib/util/math.o 00:02:13.262 CC lib/util/net.o 00:02:13.262 CC lib/util/pipe.o 00:02:13.262 CC lib/util/strerror_tls.o 00:02:13.262 CC lib/util/uuid.o 00:02:13.262 CC lib/util/string.o 00:02:13.262 CC lib/util/xor.o 00:02:13.262 CC lib/util/md5.o 00:02:13.262 CC lib/util/zipf.o 00:02:13.262 CC lib/vfio_user/host/vfio_user_pci.o 00:02:13.262 CC lib/vfio_user/host/vfio_user.o 00:02:13.262 LIB libspdk_ioat.a 00:02:13.262 LIB libspdk_dma.a 00:02:13.262 SO libspdk_ioat.so.7.0 00:02:13.262 SO libspdk_dma.so.5.0 00:02:13.262 SYMLINK libspdk_ioat.so 00:02:13.262 SYMLINK libspdk_dma.so 00:02:13.262 LIB libspdk_vfio_user.a 00:02:13.262 SO libspdk_vfio_user.so.5.0 00:02:13.262 SYMLINK libspdk_vfio_user.so 00:02:13.262 LIB libspdk_util.a 00:02:13.262 SO libspdk_util.so.10.1 00:02:13.262 SYMLINK libspdk_util.so 00:02:13.262 LIB libspdk_trace_parser.a 00:02:13.262 CC lib/rdma_utils/rdma_utils.o 00:02:13.262 CC lib/idxd/idxd.o 00:02:13.262 CC lib/json/json_parse.o 00:02:13.262 CC lib/vmd/vmd.o 00:02:13.262 CC lib/conf/conf.o 00:02:13.262 CC lib/env_dpdk/env.o 00:02:13.262 CC lib/json/json_util.o 00:02:13.262 CC lib/idxd/idxd_user.o 00:02:13.262 CC lib/env_dpdk/memory.o 00:02:13.262 CC lib/json/json_write.o 00:02:13.262 CC lib/idxd/idxd_kernel.o 00:02:13.262 CC lib/vmd/led.o 00:02:13.262 CC lib/env_dpdk/pci.o 00:02:13.262 CC lib/env_dpdk/init.o 00:02:13.262 CC lib/env_dpdk/threads.o 00:02:13.262 CC lib/env_dpdk/pci_ioat.o 00:02:13.262 CC lib/env_dpdk/pci_virtio.o 00:02:13.262 CC lib/env_dpdk/pci_vmd.o 00:02:13.262 CC lib/env_dpdk/pci_idxd.o 00:02:13.262 CC lib/env_dpdk/pci_event.o 00:02:13.262 CC lib/env_dpdk/sigbus_handler.o 00:02:13.262 CC lib/env_dpdk/pci_dpdk.o 00:02:13.262 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:13.262 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:13.262 SO libspdk_trace_parser.so.6.0 00:02:13.262 SYMLINK libspdk_trace_parser.so 00:02:13.262 LIB libspdk_conf.a 00:02:13.262 SO libspdk_conf.so.6.0 00:02:13.262 LIB libspdk_rdma_utils.a 00:02:13.262 LIB libspdk_json.a 00:02:13.262 SYMLINK libspdk_conf.so 00:02:13.262 SO libspdk_rdma_utils.so.1.0 00:02:13.262 SO libspdk_json.so.6.0 00:02:13.262 SYMLINK libspdk_rdma_utils.so 00:02:13.262 SYMLINK libspdk_json.so 00:02:13.262 CC lib/rdma_provider/common.o 00:02:13.262 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:13.262 CC lib/jsonrpc/jsonrpc_server.o 00:02:13.262 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:13.262 CC lib/jsonrpc/jsonrpc_client.o 00:02:13.262 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:13.263 LIB libspdk_idxd.a 00:02:13.263 SO libspdk_idxd.so.12.1 00:02:13.263 
SYMLINK libspdk_idxd.so 00:02:13.263 LIB libspdk_vmd.a 00:02:13.263 SO libspdk_vmd.so.6.0 00:02:13.263 LIB libspdk_rdma_provider.a 00:02:13.263 SO libspdk_rdma_provider.so.7.0 00:02:13.263 SYMLINK libspdk_vmd.so 00:02:13.263 LIB libspdk_jsonrpc.a 00:02:13.263 SYMLINK libspdk_rdma_provider.so 00:02:13.263 SO libspdk_jsonrpc.so.6.0 00:02:13.263 SYMLINK libspdk_jsonrpc.so 00:02:13.263 CC lib/rpc/rpc.o 00:02:13.263 LIB libspdk_rpc.a 00:02:13.263 SO libspdk_rpc.so.6.0 00:02:13.263 SYMLINK libspdk_rpc.so 00:02:13.263 CC lib/trace/trace.o 00:02:13.263 CC lib/keyring/keyring.o 00:02:13.263 CC lib/notify/notify.o 00:02:13.263 CC lib/trace/trace_flags.o 00:02:13.263 CC lib/keyring/keyring_rpc.o 00:02:13.263 CC lib/notify/notify_rpc.o 00:02:13.263 CC lib/trace/trace_rpc.o 00:02:13.263 LIB libspdk_notify.a 00:02:13.263 SO libspdk_notify.so.6.0 00:02:13.263 SYMLINK libspdk_notify.so 00:02:13.263 LIB libspdk_keyring.a 00:02:13.263 LIB libspdk_trace.a 00:02:13.263 SO libspdk_keyring.so.2.0 00:02:13.263 SO libspdk_trace.so.11.0 00:02:13.520 SYMLINK libspdk_keyring.so 00:02:13.520 SYMLINK libspdk_trace.so 00:02:13.520 LIB libspdk_env_dpdk.a 00:02:13.520 CC lib/thread/thread.o 00:02:13.520 CC lib/thread/iobuf.o 00:02:13.520 CC lib/sock/sock.o 00:02:13.520 CC lib/sock/sock_rpc.o 00:02:13.520 SO libspdk_env_dpdk.so.15.1 00:02:13.778 SYMLINK libspdk_env_dpdk.so 00:02:14.036 LIB libspdk_sock.a 00:02:14.036 SO libspdk_sock.so.10.0 00:02:14.036 SYMLINK libspdk_sock.so 00:02:14.294 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:14.294 CC lib/nvme/nvme_ctrlr.o 00:02:14.294 CC lib/nvme/nvme_fabric.o 00:02:14.294 CC lib/nvme/nvme_ns_cmd.o 00:02:14.294 CC lib/nvme/nvme_ns.o 00:02:14.294 CC lib/nvme/nvme_pcie_common.o 00:02:14.294 CC lib/nvme/nvme_pcie.o 00:02:14.294 CC lib/nvme/nvme_qpair.o 00:02:14.294 CC lib/nvme/nvme.o 00:02:14.294 CC lib/nvme/nvme_quirks.o 00:02:14.294 CC lib/nvme/nvme_transport.o 00:02:14.294 CC lib/nvme/nvme_discovery.o 00:02:14.294 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:14.294 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:14.294 CC lib/nvme/nvme_tcp.o 00:02:14.294 CC lib/nvme/nvme_opal.o 00:02:14.294 CC lib/nvme/nvme_io_msg.o 00:02:14.294 CC lib/nvme/nvme_poll_group.o 00:02:14.294 CC lib/nvme/nvme_zns.o 00:02:14.294 CC lib/nvme/nvme_stubs.o 00:02:14.294 CC lib/nvme/nvme_auth.o 00:02:14.294 CC lib/nvme/nvme_cuse.o 00:02:14.294 CC lib/nvme/nvme_vfio_user.o 00:02:14.294 CC lib/nvme/nvme_rdma.o 00:02:15.227 LIB libspdk_thread.a 00:02:15.227 SO libspdk_thread.so.11.0 00:02:15.227 SYMLINK libspdk_thread.so 00:02:15.484 CC lib/vfu_tgt/tgt_endpoint.o 00:02:15.484 CC lib/blob/blobstore.o 00:02:15.484 CC lib/virtio/virtio.o 00:02:15.484 CC lib/vfu_tgt/tgt_rpc.o 00:02:15.484 CC lib/virtio/virtio_vhost_user.o 00:02:15.484 CC lib/init/json_config.o 00:02:15.484 CC lib/blob/request.o 00:02:15.484 CC lib/virtio/virtio_vfio_user.o 00:02:15.484 CC lib/blob/zeroes.o 00:02:15.484 CC lib/init/subsystem.o 00:02:15.484 CC lib/fsdev/fsdev.o 00:02:15.484 CC lib/virtio/virtio_pci.o 00:02:15.484 CC lib/accel/accel.o 00:02:15.484 CC lib/blob/blob_bs_dev.o 00:02:15.484 CC lib/fsdev/fsdev_io.o 00:02:15.484 CC lib/init/subsystem_rpc.o 00:02:15.484 CC lib/accel/accel_rpc.o 00:02:15.484 CC lib/init/rpc.o 00:02:15.484 CC lib/accel/accel_sw.o 00:02:15.484 CC lib/fsdev/fsdev_rpc.o 00:02:15.742 LIB libspdk_init.a 00:02:15.742 SO libspdk_init.so.6.0 00:02:15.742 SYMLINK libspdk_init.so 00:02:15.742 LIB libspdk_vfu_tgt.a 00:02:15.742 SO libspdk_vfu_tgt.so.3.0 00:02:16.000 LIB libspdk_virtio.a 00:02:16.000 SYMLINK libspdk_vfu_tgt.so 00:02:16.000 SO 
libspdk_virtio.so.7.0 00:02:16.000 CC lib/event/app.o 00:02:16.000 CC lib/event/reactor.o 00:02:16.000 CC lib/event/log_rpc.o 00:02:16.000 CC lib/event/app_rpc.o 00:02:16.000 CC lib/event/scheduler_static.o 00:02:16.000 SYMLINK libspdk_virtio.so 00:02:16.258 LIB libspdk_fsdev.a 00:02:16.258 SO libspdk_fsdev.so.2.0 00:02:16.258 SYMLINK libspdk_fsdev.so 00:02:16.515 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:16.515 LIB libspdk_event.a 00:02:16.515 SO libspdk_event.so.14.0 00:02:16.515 SYMLINK libspdk_event.so 00:02:16.773 LIB libspdk_accel.a 00:02:16.773 SO libspdk_accel.so.16.0 00:02:16.773 LIB libspdk_nvme.a 00:02:16.773 SYMLINK libspdk_accel.so 00:02:16.773 SO libspdk_nvme.so.15.0 00:02:17.030 CC lib/bdev/bdev.o 00:02:17.030 CC lib/bdev/bdev_rpc.o 00:02:17.030 CC lib/bdev/bdev_zone.o 00:02:17.030 CC lib/bdev/part.o 00:02:17.030 CC lib/bdev/scsi_nvme.o 00:02:17.030 SYMLINK libspdk_nvme.so 00:02:17.030 LIB libspdk_fuse_dispatcher.a 00:02:17.030 SO libspdk_fuse_dispatcher.so.1.0 00:02:17.288 SYMLINK libspdk_fuse_dispatcher.so 00:02:18.660 LIB libspdk_blob.a 00:02:18.660 SO libspdk_blob.so.12.0 00:02:18.660 SYMLINK libspdk_blob.so 00:02:18.917 CC lib/blobfs/blobfs.o 00:02:18.917 CC lib/blobfs/tree.o 00:02:18.917 CC lib/lvol/lvol.o 00:02:19.854 LIB libspdk_bdev.a 00:02:19.854 SO libspdk_bdev.so.17.0 00:02:19.854 LIB libspdk_blobfs.a 00:02:19.854 SO libspdk_blobfs.so.11.0 00:02:19.854 LIB libspdk_lvol.a 00:02:19.854 SYMLINK libspdk_bdev.so 00:02:19.854 SO libspdk_lvol.so.11.0 00:02:19.854 SYMLINK libspdk_blobfs.so 00:02:19.854 SYMLINK libspdk_lvol.so 00:02:19.854 CC lib/ublk/ublk.o 00:02:19.854 CC lib/nbd/nbd.o 00:02:19.854 CC lib/nvmf/ctrlr.o 00:02:19.854 CC lib/ublk/ublk_rpc.o 00:02:19.854 CC lib/nbd/nbd_rpc.o 00:02:19.854 CC lib/scsi/dev.o 00:02:19.854 CC lib/nvmf/ctrlr_discovery.o 00:02:19.854 CC lib/scsi/lun.o 00:02:19.854 CC lib/scsi/port.o 00:02:19.854 CC lib/ftl/ftl_core.o 00:02:19.854 CC lib/nvmf/ctrlr_bdev.o 00:02:19.854 CC lib/scsi/scsi.o 00:02:19.854 CC lib/nvmf/subsystem.o 00:02:19.854 CC lib/scsi/scsi_bdev.o 00:02:19.854 CC lib/ftl/ftl_init.o 00:02:19.854 CC lib/nvmf/nvmf.o 00:02:19.854 CC lib/ftl/ftl_layout.o 00:02:19.854 CC lib/nvmf/nvmf_rpc.o 00:02:19.854 CC lib/scsi/scsi_pr.o 00:02:19.854 CC lib/ftl/ftl_debug.o 00:02:19.854 CC lib/nvmf/transport.o 00:02:19.854 CC lib/scsi/scsi_rpc.o 00:02:19.854 CC lib/ftl/ftl_io.o 00:02:19.854 CC lib/scsi/task.o 00:02:19.854 CC lib/nvmf/tcp.o 00:02:19.854 CC lib/ftl/ftl_sb.o 00:02:19.854 CC lib/ftl/ftl_l2p.o 00:02:19.854 CC lib/nvmf/stubs.o 00:02:19.854 CC lib/nvmf/vfio_user.o 00:02:19.854 CC lib/nvmf/mdns_server.o 00:02:19.854 CC lib/ftl/ftl_l2p_flat.o 00:02:19.854 CC lib/ftl/ftl_nv_cache.o 00:02:19.854 CC lib/nvmf/rdma.o 00:02:19.854 CC lib/ftl/ftl_band.o 00:02:19.854 CC lib/nvmf/auth.o 00:02:19.854 CC lib/ftl/ftl_band_ops.o 00:02:19.854 CC lib/ftl/ftl_writer.o 00:02:19.854 CC lib/ftl/ftl_rq.o 00:02:19.854 CC lib/ftl/ftl_reloc.o 00:02:19.854 CC lib/ftl/ftl_l2p_cache.o 00:02:19.854 CC lib/ftl/ftl_p2l.o 00:02:19.854 CC lib/ftl/ftl_p2l_log.o 00:02:19.854 CC lib/ftl/mngt/ftl_mngt.o 00:02:19.854 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:19.854 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:19.854 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:19.854 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:19.854 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:20.427 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:20.427 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:20.427 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:20.427 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:20.427 CC lib/ftl/mngt/ftl_mngt_p2l.o 
00:02:20.427 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:20.427 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:20.427 CC lib/ftl/utils/ftl_conf.o 00:02:20.427 CC lib/ftl/utils/ftl_md.o 00:02:20.427 CC lib/ftl/utils/ftl_mempool.o 00:02:20.427 CC lib/ftl/utils/ftl_bitmap.o 00:02:20.427 CC lib/ftl/utils/ftl_property.o 00:02:20.427 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:20.427 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:20.427 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:20.427 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:20.427 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:20.427 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:20.427 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:20.690 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:20.690 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:20.690 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:20.690 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:20.690 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:20.690 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:20.690 CC lib/ftl/base/ftl_base_dev.o 00:02:20.690 CC lib/ftl/base/ftl_base_bdev.o 00:02:20.690 CC lib/ftl/ftl_trace.o 00:02:20.690 LIB libspdk_nbd.a 00:02:20.690 SO libspdk_nbd.so.7.0 00:02:20.948 SYMLINK libspdk_nbd.so 00:02:20.948 LIB libspdk_scsi.a 00:02:20.948 SO libspdk_scsi.so.9.0 00:02:20.948 SYMLINK libspdk_scsi.so 00:02:21.206 LIB libspdk_ublk.a 00:02:21.206 SO libspdk_ublk.so.3.0 00:02:21.206 SYMLINK libspdk_ublk.so 00:02:21.206 CC lib/vhost/vhost.o 00:02:21.206 CC lib/vhost/vhost_rpc.o 00:02:21.206 CC lib/vhost/vhost_scsi.o 00:02:21.206 CC lib/vhost/vhost_blk.o 00:02:21.206 CC lib/vhost/rte_vhost_user.o 00:02:21.206 CC lib/iscsi/conn.o 00:02:21.206 CC lib/iscsi/init_grp.o 00:02:21.206 CC lib/iscsi/iscsi.o 00:02:21.206 CC lib/iscsi/param.o 00:02:21.206 CC lib/iscsi/portal_grp.o 00:02:21.206 CC lib/iscsi/tgt_node.o 00:02:21.206 CC lib/iscsi/iscsi_subsystem.o 00:02:21.206 CC lib/iscsi/iscsi_rpc.o 00:02:21.206 CC lib/iscsi/task.o 00:02:21.464 LIB libspdk_ftl.a 00:02:21.721 SO libspdk_ftl.so.9.0 00:02:21.979 SYMLINK libspdk_ftl.so 00:02:22.547 LIB libspdk_vhost.a 00:02:22.547 SO libspdk_vhost.so.8.0 00:02:22.547 LIB libspdk_nvmf.a 00:02:22.547 SYMLINK libspdk_vhost.so 00:02:22.547 SO libspdk_nvmf.so.20.0 00:02:22.806 LIB libspdk_iscsi.a 00:02:22.806 SO libspdk_iscsi.so.8.0 00:02:22.806 SYMLINK libspdk_nvmf.so 00:02:22.806 SYMLINK libspdk_iscsi.so 00:02:23.063 CC module/env_dpdk/env_dpdk_rpc.o 00:02:23.063 CC module/vfu_device/vfu_virtio.o 00:02:23.063 CC module/vfu_device/vfu_virtio_blk.o 00:02:23.063 CC module/vfu_device/vfu_virtio_scsi.o 00:02:23.064 CC module/vfu_device/vfu_virtio_rpc.o 00:02:23.064 CC module/vfu_device/vfu_virtio_fs.o 00:02:23.322 CC module/scheduler/gscheduler/gscheduler.o 00:02:23.322 CC module/sock/posix/posix.o 00:02:23.322 CC module/accel/error/accel_error.o 00:02:23.322 CC module/accel/error/accel_error_rpc.o 00:02:23.322 CC module/accel/iaa/accel_iaa.o 00:02:23.322 CC module/blob/bdev/blob_bdev.o 00:02:23.322 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:23.322 CC module/accel/iaa/accel_iaa_rpc.o 00:02:23.322 CC module/accel/ioat/accel_ioat.o 00:02:23.322 CC module/fsdev/aio/fsdev_aio.o 00:02:23.322 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:23.322 CC module/accel/ioat/accel_ioat_rpc.o 00:02:23.322 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:23.322 CC module/keyring/linux/keyring.o 00:02:23.322 CC module/fsdev/aio/linux_aio_mgr.o 00:02:23.322 CC module/keyring/linux/keyring_rpc.o 00:02:23.322 CC module/accel/dsa/accel_dsa.o 00:02:23.322 CC module/keyring/file/keyring.o 00:02:23.322 CC 
module/accel/dsa/accel_dsa_rpc.o 00:02:23.322 CC module/keyring/file/keyring_rpc.o 00:02:23.322 LIB libspdk_env_dpdk_rpc.a 00:02:23.322 SO libspdk_env_dpdk_rpc.so.6.0 00:02:23.322 SYMLINK libspdk_env_dpdk_rpc.so 00:02:23.322 LIB libspdk_keyring_linux.a 00:02:23.322 LIB libspdk_scheduler_gscheduler.a 00:02:23.322 LIB libspdk_scheduler_dpdk_governor.a 00:02:23.322 SO libspdk_keyring_linux.so.1.0 00:02:23.322 SO libspdk_scheduler_gscheduler.so.4.0 00:02:23.580 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:23.580 LIB libspdk_accel_ioat.a 00:02:23.580 SO libspdk_accel_ioat.so.6.0 00:02:23.580 LIB libspdk_keyring_file.a 00:02:23.580 LIB libspdk_accel_error.a 00:02:23.580 LIB libspdk_accel_iaa.a 00:02:23.580 SYMLINK libspdk_scheduler_gscheduler.so 00:02:23.580 SYMLINK libspdk_keyring_linux.so 00:02:23.580 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:23.580 SO libspdk_keyring_file.so.2.0 00:02:23.580 SO libspdk_accel_error.so.2.0 00:02:23.580 SO libspdk_accel_iaa.so.3.0 00:02:23.580 SYMLINK libspdk_accel_ioat.so 00:02:23.580 LIB libspdk_scheduler_dynamic.a 00:02:23.580 LIB libspdk_blob_bdev.a 00:02:23.580 SYMLINK libspdk_keyring_file.so 00:02:23.580 SYMLINK libspdk_accel_error.so 00:02:23.580 SYMLINK libspdk_accel_iaa.so 00:02:23.580 SO libspdk_scheduler_dynamic.so.4.0 00:02:23.580 SO libspdk_blob_bdev.so.12.0 00:02:23.580 SYMLINK libspdk_scheduler_dynamic.so 00:02:23.580 SYMLINK libspdk_blob_bdev.so 00:02:23.580 LIB libspdk_accel_dsa.a 00:02:23.580 SO libspdk_accel_dsa.so.5.0 00:02:23.838 SYMLINK libspdk_accel_dsa.so 00:02:23.838 LIB libspdk_vfu_device.a 00:02:23.838 SO libspdk_vfu_device.so.3.0 00:02:23.838 CC module/blobfs/bdev/blobfs_bdev.o 00:02:23.838 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:23.838 CC module/bdev/delay/vbdev_delay.o 00:02:23.838 CC module/bdev/lvol/vbdev_lvol.o 00:02:23.838 CC module/bdev/null/bdev_null.o 00:02:23.838 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:23.838 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:23.838 CC module/bdev/null/bdev_null_rpc.o 00:02:23.838 CC module/bdev/nvme/bdev_nvme.o 00:02:23.838 CC module/bdev/malloc/bdev_malloc.o 00:02:23.838 CC module/bdev/gpt/gpt.o 00:02:23.838 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:23.838 CC module/bdev/gpt/vbdev_gpt.o 00:02:23.838 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:23.838 CC module/bdev/error/vbdev_error.o 00:02:23.838 CC module/bdev/nvme/nvme_rpc.o 00:02:23.838 CC module/bdev/passthru/vbdev_passthru.o 00:02:23.838 CC module/bdev/error/vbdev_error_rpc.o 00:02:23.838 CC module/bdev/nvme/bdev_mdns_client.o 00:02:23.838 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:23.838 CC module/bdev/nvme/vbdev_opal.o 00:02:23.838 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:23.838 CC module/bdev/aio/bdev_aio.o 00:02:23.838 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:23.838 CC module/bdev/aio/bdev_aio_rpc.o 00:02:23.838 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:23.838 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:23.839 CC module/bdev/split/vbdev_split.o 00:02:23.839 CC module/bdev/raid/bdev_raid.o 00:02:23.839 CC module/bdev/raid/bdev_raid_rpc.o 00:02:23.839 CC module/bdev/split/vbdev_split_rpc.o 00:02:23.839 CC module/bdev/raid/bdev_raid_sb.o 00:02:23.839 CC module/bdev/raid/raid0.o 00:02:23.839 CC module/bdev/raid/raid1.o 00:02:23.839 CC module/bdev/raid/concat.o 00:02:23.839 CC module/bdev/ftl/bdev_ftl.o 00:02:23.839 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:23.839 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:23.839 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:23.839 CC 
module/bdev/virtio/bdev_virtio_rpc.o 00:02:23.839 CC module/bdev/iscsi/bdev_iscsi.o 00:02:23.839 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:24.097 SYMLINK libspdk_vfu_device.so 00:02:24.097 LIB libspdk_fsdev_aio.a 00:02:24.097 SO libspdk_fsdev_aio.so.1.0 00:02:24.097 LIB libspdk_sock_posix.a 00:02:24.097 SO libspdk_sock_posix.so.6.0 00:02:24.097 SYMLINK libspdk_fsdev_aio.so 00:02:24.355 SYMLINK libspdk_sock_posix.so 00:02:24.355 LIB libspdk_blobfs_bdev.a 00:02:24.355 SO libspdk_blobfs_bdev.so.6.0 00:02:24.355 LIB libspdk_bdev_split.a 00:02:24.355 SYMLINK libspdk_blobfs_bdev.so 00:02:24.355 LIB libspdk_bdev_null.a 00:02:24.355 LIB libspdk_bdev_error.a 00:02:24.355 LIB libspdk_bdev_gpt.a 00:02:24.355 SO libspdk_bdev_split.so.6.0 00:02:24.355 LIB libspdk_bdev_zone_block.a 00:02:24.355 SO libspdk_bdev_error.so.6.0 00:02:24.355 SO libspdk_bdev_null.so.6.0 00:02:24.355 LIB libspdk_bdev_ftl.a 00:02:24.355 SO libspdk_bdev_gpt.so.6.0 00:02:24.355 SO libspdk_bdev_zone_block.so.6.0 00:02:24.355 SO libspdk_bdev_ftl.so.6.0 00:02:24.355 LIB libspdk_bdev_aio.a 00:02:24.355 SYMLINK libspdk_bdev_split.so 00:02:24.355 LIB libspdk_bdev_passthru.a 00:02:24.355 SYMLINK libspdk_bdev_error.so 00:02:24.355 SYMLINK libspdk_bdev_null.so 00:02:24.355 SO libspdk_bdev_aio.so.6.0 00:02:24.355 SO libspdk_bdev_passthru.so.6.0 00:02:24.355 SYMLINK libspdk_bdev_gpt.so 00:02:24.355 SYMLINK libspdk_bdev_zone_block.so 00:02:24.355 SYMLINK libspdk_bdev_ftl.so 00:02:24.355 LIB libspdk_bdev_iscsi.a 00:02:24.355 LIB libspdk_bdev_delay.a 00:02:24.612 SO libspdk_bdev_iscsi.so.6.0 00:02:24.612 SO libspdk_bdev_delay.so.6.0 00:02:24.612 SYMLINK libspdk_bdev_aio.so 00:02:24.612 SYMLINK libspdk_bdev_passthru.so 00:02:24.612 LIB libspdk_bdev_malloc.a 00:02:24.612 SO libspdk_bdev_malloc.so.6.0 00:02:24.612 SYMLINK libspdk_bdev_delay.so 00:02:24.612 SYMLINK libspdk_bdev_iscsi.so 00:02:24.612 SYMLINK libspdk_bdev_malloc.so 00:02:24.612 LIB libspdk_bdev_lvol.a 00:02:24.612 LIB libspdk_bdev_virtio.a 00:02:24.612 SO libspdk_bdev_lvol.so.6.0 00:02:24.612 SO libspdk_bdev_virtio.so.6.0 00:02:24.612 SYMLINK libspdk_bdev_lvol.so 00:02:24.612 SYMLINK libspdk_bdev_virtio.so 00:02:25.178 LIB libspdk_bdev_raid.a 00:02:25.178 SO libspdk_bdev_raid.so.6.0 00:02:25.178 SYMLINK libspdk_bdev_raid.so 00:02:26.552 LIB libspdk_bdev_nvme.a 00:02:26.810 SO libspdk_bdev_nvme.so.7.1 00:02:26.810 SYMLINK libspdk_bdev_nvme.so 00:02:27.068 CC module/event/subsystems/keyring/keyring.o 00:02:27.068 CC module/event/subsystems/iobuf/iobuf.o 00:02:27.068 CC module/event/subsystems/sock/sock.o 00:02:27.068 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:27.068 CC module/event/subsystems/scheduler/scheduler.o 00:02:27.068 CC module/event/subsystems/vmd/vmd.o 00:02:27.068 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:27.068 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:27.068 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:27.068 CC module/event/subsystems/fsdev/fsdev.o 00:02:27.326 LIB libspdk_event_keyring.a 00:02:27.326 LIB libspdk_event_vhost_blk.a 00:02:27.326 LIB libspdk_event_fsdev.a 00:02:27.326 LIB libspdk_event_scheduler.a 00:02:27.326 LIB libspdk_event_vmd.a 00:02:27.326 LIB libspdk_event_vfu_tgt.a 00:02:27.326 LIB libspdk_event_sock.a 00:02:27.326 SO libspdk_event_keyring.so.1.0 00:02:27.326 LIB libspdk_event_iobuf.a 00:02:27.326 SO libspdk_event_fsdev.so.1.0 00:02:27.326 SO libspdk_event_scheduler.so.4.0 00:02:27.326 SO libspdk_event_vhost_blk.so.3.0 00:02:27.326 SO libspdk_event_vfu_tgt.so.3.0 00:02:27.326 SO libspdk_event_vmd.so.6.0 
00:02:27.326 SO libspdk_event_sock.so.5.0 00:02:27.326 SO libspdk_event_iobuf.so.3.0 00:02:27.326 SYMLINK libspdk_event_keyring.so 00:02:27.326 SYMLINK libspdk_event_fsdev.so 00:02:27.326 SYMLINK libspdk_event_vhost_blk.so 00:02:27.326 SYMLINK libspdk_event_scheduler.so 00:02:27.326 SYMLINK libspdk_event_vfu_tgt.so 00:02:27.326 SYMLINK libspdk_event_sock.so 00:02:27.326 SYMLINK libspdk_event_vmd.so 00:02:27.326 SYMLINK libspdk_event_iobuf.so 00:02:27.585 CC module/event/subsystems/accel/accel.o 00:02:27.844 LIB libspdk_event_accel.a 00:02:27.844 SO libspdk_event_accel.so.6.0 00:02:27.844 SYMLINK libspdk_event_accel.so 00:02:28.103 CC module/event/subsystems/bdev/bdev.o 00:02:28.103 LIB libspdk_event_bdev.a 00:02:28.103 SO libspdk_event_bdev.so.6.0 00:02:28.103 SYMLINK libspdk_event_bdev.so 00:02:28.361 CC module/event/subsystems/scsi/scsi.o 00:02:28.361 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:28.361 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:28.361 CC module/event/subsystems/ublk/ublk.o 00:02:28.361 CC module/event/subsystems/nbd/nbd.o 00:02:28.619 LIB libspdk_event_nbd.a 00:02:28.619 LIB libspdk_event_ublk.a 00:02:28.619 LIB libspdk_event_scsi.a 00:02:28.619 SO libspdk_event_ublk.so.3.0 00:02:28.619 SO libspdk_event_nbd.so.6.0 00:02:28.619 SO libspdk_event_scsi.so.6.0 00:02:28.619 SYMLINK libspdk_event_nbd.so 00:02:28.619 SYMLINK libspdk_event_ublk.so 00:02:28.619 SYMLINK libspdk_event_scsi.so 00:02:28.619 LIB libspdk_event_nvmf.a 00:02:28.619 SO libspdk_event_nvmf.so.6.0 00:02:28.619 SYMLINK libspdk_event_nvmf.so 00:02:28.877 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:28.877 CC module/event/subsystems/iscsi/iscsi.o 00:02:28.877 LIB libspdk_event_vhost_scsi.a 00:02:28.877 SO libspdk_event_vhost_scsi.so.3.0 00:02:28.877 LIB libspdk_event_iscsi.a 00:02:28.877 SO libspdk_event_iscsi.so.6.0 00:02:28.877 SYMLINK libspdk_event_vhost_scsi.so 00:02:29.135 SYMLINK libspdk_event_iscsi.so 00:02:29.135 SO libspdk.so.6.0 00:02:29.135 SYMLINK libspdk.so 00:02:29.397 CC app/trace_record/trace_record.o 00:02:29.397 CXX app/trace/trace.o 00:02:29.397 TEST_HEADER include/spdk/accel.h 00:02:29.397 TEST_HEADER include/spdk/assert.h 00:02:29.397 CC app/spdk_top/spdk_top.o 00:02:29.397 TEST_HEADER include/spdk/accel_module.h 00:02:29.397 TEST_HEADER include/spdk/barrier.h 00:02:29.397 TEST_HEADER include/spdk/base64.h 00:02:29.397 TEST_HEADER include/spdk/bdev.h 00:02:29.397 TEST_HEADER include/spdk/bdev_module.h 00:02:29.397 CC app/spdk_lspci/spdk_lspci.o 00:02:29.397 CC app/spdk_nvme_perf/perf.o 00:02:29.397 CC app/spdk_nvme_discover/discovery_aer.o 00:02:29.397 TEST_HEADER include/spdk/bdev_zone.h 00:02:29.397 TEST_HEADER include/spdk/bit_array.h 00:02:29.397 TEST_HEADER include/spdk/bit_pool.h 00:02:29.397 TEST_HEADER include/spdk/blob_bdev.h 00:02:29.397 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:29.397 TEST_HEADER include/spdk/blobfs.h 00:02:29.397 TEST_HEADER include/spdk/blob.h 00:02:29.397 TEST_HEADER include/spdk/conf.h 00:02:29.397 CC test/rpc_client/rpc_client_test.o 00:02:29.398 TEST_HEADER include/spdk/config.h 00:02:29.398 TEST_HEADER include/spdk/cpuset.h 00:02:29.398 TEST_HEADER include/spdk/crc16.h 00:02:29.398 TEST_HEADER include/spdk/crc32.h 00:02:29.398 CC app/spdk_nvme_identify/identify.o 00:02:29.398 TEST_HEADER include/spdk/crc64.h 00:02:29.398 TEST_HEADER include/spdk/dif.h 00:02:29.398 TEST_HEADER include/spdk/dma.h 00:02:29.398 TEST_HEADER include/spdk/env_dpdk.h 00:02:29.398 TEST_HEADER include/spdk/endian.h 00:02:29.398 TEST_HEADER include/spdk/env.h 
00:02:29.398 TEST_HEADER include/spdk/event.h 00:02:29.398 TEST_HEADER include/spdk/fd_group.h 00:02:29.398 TEST_HEADER include/spdk/fd.h 00:02:29.398 TEST_HEADER include/spdk/file.h 00:02:29.398 TEST_HEADER include/spdk/fsdev.h 00:02:29.398 TEST_HEADER include/spdk/fsdev_module.h 00:02:29.398 TEST_HEADER include/spdk/ftl.h 00:02:29.398 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:29.398 TEST_HEADER include/spdk/gpt_spec.h 00:02:29.398 TEST_HEADER include/spdk/hexlify.h 00:02:29.398 TEST_HEADER include/spdk/histogram_data.h 00:02:29.398 TEST_HEADER include/spdk/idxd.h 00:02:29.398 TEST_HEADER include/spdk/idxd_spec.h 00:02:29.398 TEST_HEADER include/spdk/ioat.h 00:02:29.398 TEST_HEADER include/spdk/init.h 00:02:29.398 TEST_HEADER include/spdk/ioat_spec.h 00:02:29.398 TEST_HEADER include/spdk/iscsi_spec.h 00:02:29.398 TEST_HEADER include/spdk/json.h 00:02:29.398 TEST_HEADER include/spdk/jsonrpc.h 00:02:29.398 TEST_HEADER include/spdk/keyring_module.h 00:02:29.398 TEST_HEADER include/spdk/keyring.h 00:02:29.398 TEST_HEADER include/spdk/likely.h 00:02:29.398 TEST_HEADER include/spdk/log.h 00:02:29.398 TEST_HEADER include/spdk/lvol.h 00:02:29.398 TEST_HEADER include/spdk/memory.h 00:02:29.398 TEST_HEADER include/spdk/md5.h 00:02:29.398 TEST_HEADER include/spdk/mmio.h 00:02:29.398 TEST_HEADER include/spdk/nbd.h 00:02:29.398 TEST_HEADER include/spdk/net.h 00:02:29.398 TEST_HEADER include/spdk/notify.h 00:02:29.398 TEST_HEADER include/spdk/nvme.h 00:02:29.398 TEST_HEADER include/spdk/nvme_intel.h 00:02:29.398 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:29.398 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:29.398 TEST_HEADER include/spdk/nvme_spec.h 00:02:29.398 TEST_HEADER include/spdk/nvme_zns.h 00:02:29.398 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:29.398 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:29.398 TEST_HEADER include/spdk/nvmf.h 00:02:29.398 TEST_HEADER include/spdk/nvmf_spec.h 00:02:29.398 TEST_HEADER include/spdk/nvmf_transport.h 00:02:29.398 TEST_HEADER include/spdk/opal_spec.h 00:02:29.398 TEST_HEADER include/spdk/opal.h 00:02:29.398 TEST_HEADER include/spdk/pci_ids.h 00:02:29.398 TEST_HEADER include/spdk/pipe.h 00:02:29.398 TEST_HEADER include/spdk/queue.h 00:02:29.398 TEST_HEADER include/spdk/reduce.h 00:02:29.398 TEST_HEADER include/spdk/rpc.h 00:02:29.398 TEST_HEADER include/spdk/scheduler.h 00:02:29.398 TEST_HEADER include/spdk/scsi.h 00:02:29.398 TEST_HEADER include/spdk/scsi_spec.h 00:02:29.398 TEST_HEADER include/spdk/sock.h 00:02:29.398 TEST_HEADER include/spdk/stdinc.h 00:02:29.398 TEST_HEADER include/spdk/string.h 00:02:29.398 TEST_HEADER include/spdk/thread.h 00:02:29.398 TEST_HEADER include/spdk/trace.h 00:02:29.398 TEST_HEADER include/spdk/tree.h 00:02:29.398 TEST_HEADER include/spdk/trace_parser.h 00:02:29.398 TEST_HEADER include/spdk/ublk.h 00:02:29.398 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:29.398 TEST_HEADER include/spdk/util.h 00:02:29.398 TEST_HEADER include/spdk/uuid.h 00:02:29.398 TEST_HEADER include/spdk/version.h 00:02:29.398 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:29.398 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:29.398 TEST_HEADER include/spdk/vhost.h 00:02:29.398 TEST_HEADER include/spdk/vmd.h 00:02:29.398 TEST_HEADER include/spdk/xor.h 00:02:29.398 TEST_HEADER include/spdk/zipf.h 00:02:29.398 CXX test/cpp_headers/accel.o 00:02:29.398 CXX test/cpp_headers/assert.o 00:02:29.398 CXX test/cpp_headers/accel_module.o 00:02:29.398 CXX test/cpp_headers/barrier.o 00:02:29.398 CXX test/cpp_headers/base64.o 00:02:29.398 CXX 
test/cpp_headers/bdev.o 00:02:29.398 CXX test/cpp_headers/bdev_module.o 00:02:29.398 CXX test/cpp_headers/bdev_zone.o 00:02:29.398 CXX test/cpp_headers/bit_array.o 00:02:29.398 CXX test/cpp_headers/bit_pool.o 00:02:29.398 CXX test/cpp_headers/blob_bdev.o 00:02:29.398 CXX test/cpp_headers/blobfs_bdev.o 00:02:29.398 CXX test/cpp_headers/blobfs.o 00:02:29.398 CXX test/cpp_headers/blob.o 00:02:29.398 CXX test/cpp_headers/conf.o 00:02:29.398 CC app/spdk_dd/spdk_dd.o 00:02:29.398 CXX test/cpp_headers/config.o 00:02:29.398 CXX test/cpp_headers/cpuset.o 00:02:29.398 CXX test/cpp_headers/crc16.o 00:02:29.398 CC app/nvmf_tgt/nvmf_main.o 00:02:29.398 CC app/iscsi_tgt/iscsi_tgt.o 00:02:29.398 CXX test/cpp_headers/crc32.o 00:02:29.398 CC test/app/jsoncat/jsoncat.o 00:02:29.398 CC test/thread/poller_perf/poller_perf.o 00:02:29.398 CC app/spdk_tgt/spdk_tgt.o 00:02:29.398 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:29.398 CC examples/ioat/perf/perf.o 00:02:29.398 CC test/env/vtophys/vtophys.o 00:02:29.398 CC test/env/pci/pci_ut.o 00:02:29.398 CC examples/ioat/verify/verify.o 00:02:29.398 CC test/app/histogram_perf/histogram_perf.o 00:02:29.398 CC test/env/memory/memory_ut.o 00:02:29.398 CC examples/util/zipf/zipf.o 00:02:29.398 CC test/app/stub/stub.o 00:02:29.398 CC app/fio/nvme/fio_plugin.o 00:02:29.663 CC test/app/bdev_svc/bdev_svc.o 00:02:29.663 CC test/dma/test_dma/test_dma.o 00:02:29.663 CC app/fio/bdev/fio_plugin.o 00:02:29.663 LINK spdk_lspci 00:02:29.663 CC test/env/mem_callbacks/mem_callbacks.o 00:02:29.663 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:29.663 LINK rpc_client_test 00:02:29.663 LINK spdk_nvme_discover 00:02:29.922 LINK interrupt_tgt 00:02:29.922 LINK jsoncat 00:02:29.922 LINK histogram_perf 00:02:29.922 LINK poller_perf 00:02:29.922 CXX test/cpp_headers/crc64.o 00:02:29.922 LINK vtophys 00:02:29.922 LINK zipf 00:02:29.922 LINK spdk_trace_record 00:02:29.922 LINK nvmf_tgt 00:02:29.922 CXX test/cpp_headers/dif.o 00:02:29.922 CXX test/cpp_headers/dma.o 00:02:29.922 CXX test/cpp_headers/endian.o 00:02:29.922 CXX test/cpp_headers/env_dpdk.o 00:02:29.922 CXX test/cpp_headers/env.o 00:02:29.922 CXX test/cpp_headers/event.o 00:02:29.922 LINK env_dpdk_post_init 00:02:29.922 CXX test/cpp_headers/fd_group.o 00:02:29.922 CXX test/cpp_headers/fd.o 00:02:29.922 CXX test/cpp_headers/file.o 00:02:29.922 LINK iscsi_tgt 00:02:29.922 CXX test/cpp_headers/fsdev.o 00:02:29.922 CXX test/cpp_headers/fsdev_module.o 00:02:29.922 LINK stub 00:02:29.922 CXX test/cpp_headers/ftl.o 00:02:29.922 CXX test/cpp_headers/fuse_dispatcher.o 00:02:29.922 CXX test/cpp_headers/gpt_spec.o 00:02:29.922 CXX test/cpp_headers/hexlify.o 00:02:29.922 LINK verify 00:02:29.922 CXX test/cpp_headers/histogram_data.o 00:02:29.922 LINK ioat_perf 00:02:29.922 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:29.922 LINK bdev_svc 00:02:29.922 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:29.922 LINK spdk_tgt 00:02:30.185 CXX test/cpp_headers/idxd.o 00:02:30.185 CXX test/cpp_headers/idxd_spec.o 00:02:30.186 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:30.186 CXX test/cpp_headers/init.o 00:02:30.186 CXX test/cpp_headers/ioat.o 00:02:30.186 CXX test/cpp_headers/ioat_spec.o 00:02:30.186 CXX test/cpp_headers/iscsi_spec.o 00:02:30.186 CXX test/cpp_headers/json.o 00:02:30.186 LINK spdk_dd 00:02:30.186 CXX test/cpp_headers/jsonrpc.o 00:02:30.186 CXX test/cpp_headers/keyring.o 00:02:30.186 CXX test/cpp_headers/keyring_module.o 00:02:30.186 CXX test/cpp_headers/likely.o 00:02:30.186 CXX test/cpp_headers/log.o 00:02:30.186 LINK 
spdk_trace 00:02:30.186 CXX test/cpp_headers/lvol.o 00:02:30.186 CXX test/cpp_headers/md5.o 00:02:30.186 CXX test/cpp_headers/memory.o 00:02:30.186 CXX test/cpp_headers/mmio.o 00:02:30.186 CXX test/cpp_headers/nbd.o 00:02:30.186 CXX test/cpp_headers/net.o 00:02:30.186 CXX test/cpp_headers/notify.o 00:02:30.186 LINK pci_ut 00:02:30.453 CXX test/cpp_headers/nvme.o 00:02:30.453 CXX test/cpp_headers/nvme_intel.o 00:02:30.453 CXX test/cpp_headers/nvme_ocssd.o 00:02:30.453 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:30.453 CXX test/cpp_headers/nvme_spec.o 00:02:30.453 CXX test/cpp_headers/nvme_zns.o 00:02:30.453 CXX test/cpp_headers/nvmf_cmd.o 00:02:30.453 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:30.453 CC test/event/event_perf/event_perf.o 00:02:30.453 CC test/event/reactor_perf/reactor_perf.o 00:02:30.453 CXX test/cpp_headers/nvmf.o 00:02:30.453 CC test/event/reactor/reactor.o 00:02:30.453 CXX test/cpp_headers/nvmf_spec.o 00:02:30.453 LINK nvme_fuzz 00:02:30.453 CXX test/cpp_headers/nvmf_transport.o 00:02:30.453 CXX test/cpp_headers/opal.o 00:02:30.453 CC examples/sock/hello_world/hello_sock.o 00:02:30.453 CC examples/vmd/lsvmd/lsvmd.o 00:02:30.453 CC test/event/app_repeat/app_repeat.o 00:02:30.716 CXX test/cpp_headers/opal_spec.o 00:02:30.716 CXX test/cpp_headers/pci_ids.o 00:02:30.716 CC examples/idxd/perf/perf.o 00:02:30.716 LINK spdk_bdev 00:02:30.716 CXX test/cpp_headers/pipe.o 00:02:30.716 LINK spdk_nvme 00:02:30.716 CC examples/thread/thread/thread_ex.o 00:02:30.716 LINK test_dma 00:02:30.716 CC examples/vmd/led/led.o 00:02:30.716 CXX test/cpp_headers/queue.o 00:02:30.716 CC test/event/scheduler/scheduler.o 00:02:30.716 CXX test/cpp_headers/reduce.o 00:02:30.716 CXX test/cpp_headers/rpc.o 00:02:30.716 CXX test/cpp_headers/scheduler.o 00:02:30.716 CXX test/cpp_headers/scsi.o 00:02:30.716 CXX test/cpp_headers/scsi_spec.o 00:02:30.716 CXX test/cpp_headers/sock.o 00:02:30.716 CXX test/cpp_headers/stdinc.o 00:02:30.716 CXX test/cpp_headers/string.o 00:02:30.716 CXX test/cpp_headers/thread.o 00:02:30.716 CXX test/cpp_headers/trace.o 00:02:30.716 CXX test/cpp_headers/trace_parser.o 00:02:30.716 CXX test/cpp_headers/tree.o 00:02:30.716 CXX test/cpp_headers/ublk.o 00:02:30.716 CXX test/cpp_headers/util.o 00:02:30.716 CXX test/cpp_headers/uuid.o 00:02:30.716 CXX test/cpp_headers/version.o 00:02:30.716 CXX test/cpp_headers/vfio_user_pci.o 00:02:30.716 LINK event_perf 00:02:30.716 LINK reactor 00:02:30.716 CXX test/cpp_headers/vfio_user_spec.o 00:02:30.976 LINK reactor_perf 00:02:30.976 CXX test/cpp_headers/vhost.o 00:02:30.976 CXX test/cpp_headers/vmd.o 00:02:30.976 CXX test/cpp_headers/xor.o 00:02:30.976 CXX test/cpp_headers/zipf.o 00:02:30.976 CC app/vhost/vhost.o 00:02:30.976 LINK mem_callbacks 00:02:30.976 LINK lsvmd 00:02:30.976 LINK app_repeat 00:02:30.976 LINK vhost_fuzz 00:02:30.976 LINK spdk_nvme_perf 00:02:30.976 LINK led 00:02:30.976 LINK spdk_nvme_identify 00:02:30.976 LINK hello_sock 00:02:30.976 LINK spdk_top 00:02:31.234 LINK scheduler 00:02:31.234 LINK thread 00:02:31.234 LINK vhost 00:02:31.234 CC test/nvme/e2edp/nvme_dp.o 00:02:31.234 LINK idxd_perf 00:02:31.234 CC test/nvme/startup/startup.o 00:02:31.234 CC test/nvme/sgl/sgl.o 00:02:31.234 CC test/nvme/aer/aer.o 00:02:31.234 CC test/nvme/fdp/fdp.o 00:02:31.234 CC test/nvme/boot_partition/boot_partition.o 00:02:31.234 CC test/nvme/reset/reset.o 00:02:31.234 CC test/nvme/compliance/nvme_compliance.o 00:02:31.234 CC test/nvme/cuse/cuse.o 00:02:31.234 CC test/nvme/simple_copy/simple_copy.o 00:02:31.234 CC 
test/nvme/fused_ordering/fused_ordering.o 00:02:31.234 CC test/nvme/err_injection/err_injection.o 00:02:31.234 CC test/nvme/connect_stress/connect_stress.o 00:02:31.234 CC test/nvme/overhead/overhead.o 00:02:31.234 CC test/nvme/reserve/reserve.o 00:02:31.234 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:31.234 CC test/accel/dif/dif.o 00:02:31.234 CC test/blobfs/mkfs/mkfs.o 00:02:31.234 CC test/lvol/esnap/esnap.o 00:02:31.505 CC examples/nvme/hello_world/hello_world.o 00:02:31.505 CC examples/nvme/abort/abort.o 00:02:31.505 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:31.505 CC examples/nvme/hotplug/hotplug.o 00:02:31.505 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:31.505 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:31.505 CC examples/nvme/reconnect/reconnect.o 00:02:31.505 CC examples/nvme/arbitration/arbitration.o 00:02:31.505 LINK connect_stress 00:02:31.505 LINK err_injection 00:02:31.505 LINK fused_ordering 00:02:31.505 LINK reserve 00:02:31.505 LINK doorbell_aers 00:02:31.505 LINK startup 00:02:31.505 LINK simple_copy 00:02:31.505 LINK reset 00:02:31.505 LINK boot_partition 00:02:31.800 LINK sgl 00:02:31.800 LINK memory_ut 00:02:31.800 LINK aer 00:02:31.800 LINK nvme_dp 00:02:31.800 LINK mkfs 00:02:31.800 CC examples/accel/perf/accel_perf.o 00:02:31.800 LINK nvme_compliance 00:02:31.800 LINK fdp 00:02:31.800 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:31.800 LINK overhead 00:02:31.800 CC examples/blob/cli/blobcli.o 00:02:31.800 CC examples/blob/hello_world/hello_blob.o 00:02:31.800 LINK cmb_copy 00:02:31.800 LINK pmr_persistence 00:02:31.800 LINK hello_world 00:02:31.800 LINK hotplug 00:02:32.058 LINK abort 00:02:32.058 LINK arbitration 00:02:32.058 LINK hello_blob 00:02:32.058 LINK reconnect 00:02:32.058 LINK dif 00:02:32.058 LINK nvme_manage 00:02:32.058 LINK hello_fsdev 00:02:32.316 LINK accel_perf 00:02:32.316 LINK blobcli 00:02:32.573 CC test/bdev/bdevio/bdevio.o 00:02:32.573 LINK iscsi_fuzz 00:02:32.573 CC examples/bdev/hello_world/hello_bdev.o 00:02:32.573 CC examples/bdev/bdevperf/bdevperf.o 00:02:32.830 LINK cuse 00:02:32.831 LINK hello_bdev 00:02:32.831 LINK bdevio 00:02:33.395 LINK bdevperf 00:02:33.960 CC examples/nvmf/nvmf/nvmf.o 00:02:34.218 LINK nvmf 00:02:36.744 LINK esnap 00:02:37.002 00:02:37.002 real 1m10.077s 00:02:37.002 user 11m53.418s 00:02:37.002 sys 2m37.840s 00:02:37.002 20:32:40 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:37.002 20:32:40 make -- common/autotest_common.sh@10 -- $ set +x 00:02:37.002 ************************************ 00:02:37.002 END TEST make 00:02:37.002 ************************************ 00:02:37.002 20:32:40 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:37.002 20:32:40 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:37.002 20:32:40 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:37.002 20:32:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.002 20:32:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:37.002 20:32:40 -- pm/common@44 -- $ pid=1458989 00:02:37.002 20:32:40 -- pm/common@50 -- $ kill -TERM 1458989 00:02:37.002 20:32:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.002 20:32:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:37.002 20:32:40 -- pm/common@44 -- $ pid=1458991 00:02:37.002 20:32:40 -- pm/common@50 -- $ kill -TERM 1458991 00:02:37.002 
20:32:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.002 20:32:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:37.002 20:32:40 -- pm/common@44 -- $ pid=1458993 00:02:37.002 20:32:40 -- pm/common@50 -- $ kill -TERM 1458993 00:02:37.002 20:32:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.002 20:32:40 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:37.002 20:32:40 -- pm/common@44 -- $ pid=1459023 00:02:37.002 20:32:40 -- pm/common@50 -- $ sudo -E kill -TERM 1459023 00:02:37.002 20:32:40 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:37.002 20:32:40 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:37.002 20:32:40 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:37.002 20:32:40 -- common/autotest_common.sh@1693 -- # lcov --version 00:02:37.002 20:32:40 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:37.261 20:32:40 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:37.261 20:32:40 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:37.261 20:32:40 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:37.261 20:32:40 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:37.261 20:32:40 -- scripts/common.sh@336 -- # IFS=.-: 00:02:37.261 20:32:40 -- scripts/common.sh@336 -- # read -ra ver1 00:02:37.261 20:32:40 -- scripts/common.sh@337 -- # IFS=.-: 00:02:37.261 20:32:40 -- scripts/common.sh@337 -- # read -ra ver2 00:02:37.261 20:32:40 -- scripts/common.sh@338 -- # local 'op=<' 00:02:37.261 20:32:40 -- scripts/common.sh@340 -- # ver1_l=2 00:02:37.261 20:32:40 -- scripts/common.sh@341 -- # ver2_l=1 00:02:37.261 20:32:40 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:37.261 20:32:40 -- scripts/common.sh@344 -- # case "$op" in 00:02:37.261 20:32:40 -- scripts/common.sh@345 -- # : 1 00:02:37.261 20:32:40 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:37.261 20:32:40 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:37.261 20:32:40 -- scripts/common.sh@365 -- # decimal 1 00:02:37.261 20:32:40 -- scripts/common.sh@353 -- # local d=1 00:02:37.261 20:32:40 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:37.261 20:32:40 -- scripts/common.sh@355 -- # echo 1 00:02:37.261 20:32:40 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:37.261 20:32:40 -- scripts/common.sh@366 -- # decimal 2 00:02:37.261 20:32:40 -- scripts/common.sh@353 -- # local d=2 00:02:37.261 20:32:40 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:37.261 20:32:40 -- scripts/common.sh@355 -- # echo 2 00:02:37.261 20:32:40 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:37.261 20:32:40 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:37.261 20:32:40 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:37.261 20:32:40 -- scripts/common.sh@368 -- # return 0 00:02:37.261 20:32:40 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:37.261 20:32:40 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:37.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:37.261 --rc genhtml_branch_coverage=1 00:02:37.261 --rc genhtml_function_coverage=1 00:02:37.261 --rc genhtml_legend=1 00:02:37.261 --rc geninfo_all_blocks=1 00:02:37.261 --rc geninfo_unexecuted_blocks=1 00:02:37.261 00:02:37.261 ' 00:02:37.261 20:32:40 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:37.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:37.261 --rc genhtml_branch_coverage=1 00:02:37.261 --rc genhtml_function_coverage=1 00:02:37.261 --rc genhtml_legend=1 00:02:37.261 --rc geninfo_all_blocks=1 00:02:37.261 --rc geninfo_unexecuted_blocks=1 00:02:37.261 00:02:37.261 ' 00:02:37.261 20:32:40 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:37.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:37.261 --rc genhtml_branch_coverage=1 00:02:37.261 --rc genhtml_function_coverage=1 00:02:37.261 --rc genhtml_legend=1 00:02:37.261 --rc geninfo_all_blocks=1 00:02:37.261 --rc geninfo_unexecuted_blocks=1 00:02:37.261 00:02:37.261 ' 00:02:37.261 20:32:40 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:37.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:37.261 --rc genhtml_branch_coverage=1 00:02:37.261 --rc genhtml_function_coverage=1 00:02:37.261 --rc genhtml_legend=1 00:02:37.261 --rc geninfo_all_blocks=1 00:02:37.261 --rc geninfo_unexecuted_blocks=1 00:02:37.261 00:02:37.261 ' 00:02:37.261 20:32:40 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:37.261 20:32:40 -- nvmf/common.sh@7 -- # uname -s 00:02:37.261 20:32:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:37.261 20:32:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:37.261 20:32:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:37.261 20:32:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:37.261 20:32:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:37.261 20:32:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:37.261 20:32:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:37.261 20:32:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:37.261 20:32:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:37.261 20:32:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:37.261 20:32:40 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:02:37.261 20:32:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:02:37.261 20:32:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:37.261 20:32:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:37.261 20:32:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:37.261 20:32:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:37.261 20:32:40 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:37.261 20:32:40 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:37.261 20:32:40 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:37.261 20:32:40 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:37.261 20:32:40 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:37.261 20:32:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:37.261 20:32:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:37.261 20:32:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:37.261 20:32:40 -- paths/export.sh@5 -- # export PATH 00:02:37.261 20:32:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:37.261 20:32:40 -- nvmf/common.sh@51 -- # : 0 00:02:37.261 20:32:40 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:37.261 20:32:40 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:37.261 20:32:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:37.261 20:32:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:37.261 20:32:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:37.261 20:32:40 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:37.261 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:37.261 20:32:40 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:37.261 20:32:40 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:37.261 20:32:40 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:37.261 20:32:40 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:37.261 20:32:40 -- spdk/autotest.sh@32 -- # uname -s 00:02:37.261 20:32:40 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:37.261 20:32:40 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:37.261 20:32:40 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
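The autotest.sh trace around this point saves the kernel's existing core pattern (systemd-coredump in this run), creates the coredumps output directory, and then hands core dumps to SPDK's core-collector.sh. A minimal sketch of that handoff, assuming the echoed pattern is written to /proc/sys/kernel/core_pattern (the redirect target is not visible in the xtrace, so that path is an assumption) and that the script runs as root:

```bash
#!/usr/bin/env bash
# Sketch of the core-dump handoff seen in the autotest.sh trace (needs root).
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
output_dir=$rootdir/../output

# Remember the pattern that was active before the run (systemd-coredump here).
old_core_pattern=$(cat /proc/sys/kernel/core_pattern)

# Pipe every core through SPDK's collector: %P = PID, %s = signal, %t = time.
echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern

# Collected cores land next to the rest of the job artifacts.
mkdir -p "$output_dir/coredumps"

# ... run tests ...

# Restore the original handler once the run is over.
echo "$old_core_pattern" > /proc/sys/kernel/core_pattern
```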
00:02:37.261 20:32:40 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:37.261 20:32:40 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:37.261 20:32:40 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:37.261 20:32:40 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:37.261 20:32:40 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:37.261 20:32:40 -- spdk/autotest.sh@48 -- # udevadm_pid=1518421 00:02:37.261 20:32:40 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:37.261 20:32:40 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:37.261 20:32:40 -- pm/common@17 -- # local monitor 00:02:37.261 20:32:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.261 20:32:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.261 20:32:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.261 20:32:40 -- pm/common@21 -- # date +%s 00:02:37.261 20:32:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:37.261 20:32:40 -- pm/common@21 -- # date +%s 00:02:37.261 20:32:40 -- pm/common@25 -- # sleep 1 00:02:37.261 20:32:40 -- pm/common@21 -- # date +%s 00:02:37.261 20:32:40 -- pm/common@21 -- # date +%s 00:02:37.261 20:32:40 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732649560 00:02:37.261 20:32:40 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732649560 00:02:37.261 20:32:40 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732649560 00:02:37.261 20:32:40 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732649560 00:02:37.261 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732649560_collect-vmstat.pm.log 00:02:37.261 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732649560_collect-cpu-load.pm.log 00:02:37.261 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732649560_collect-cpu-temp.pm.log 00:02:37.261 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732649560_collect-bmc-pm.bmc.pm.log 00:02:38.198 20:32:41 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:38.198 20:32:41 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:38.198 20:32:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:38.198 20:32:41 -- common/autotest_common.sh@10 -- # set +x 00:02:38.198 20:32:41 -- spdk/autotest.sh@59 -- # create_test_list 00:02:38.198 20:32:41 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:38.198 20:32:41 -- common/autotest_common.sh@10 -- # set +x 00:02:38.198 20:32:41 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:38.198 20:32:41 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:38.198 20:32:41 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:38.198 20:32:41 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:38.198 20:32:41 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:38.198 20:32:41 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:38.198 20:32:41 -- common/autotest_common.sh@1457 -- # uname 00:02:38.198 20:32:41 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:38.198 20:32:41 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:38.198 20:32:41 -- common/autotest_common.sh@1477 -- # uname 00:02:38.198 20:32:41 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:38.198 20:32:41 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:38.198 20:32:41 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:38.198 lcov: LCOV version 1.15 00:02:38.198 20:32:41 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:56.264 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:56.264 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:18.194 20:33:19 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:18.194 20:33:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:18.194 20:33:19 -- common/autotest_common.sh@10 -- # set +x 00:03:18.194 20:33:19 -- spdk/autotest.sh@78 -- # rm -f 00:03:18.194 20:33:19 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:18.194 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:18.194 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:18.194 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:18.194 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:18.194 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:18.194 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:18.194 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:18.194 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:18.194 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:03:18.194 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:18.194 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:18.194 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:18.194 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:18.194 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:18.194 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:18.194 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:18.194 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:18.194 20:33:21 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:18.194 20:33:21 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:18.194 20:33:21 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:18.194 20:33:21 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:18.194 20:33:21 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:18.194 20:33:21 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:18.194 20:33:21 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:18.194 20:33:21 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:18.194 20:33:21 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:18.194 20:33:21 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:18.194 20:33:21 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:18.194 20:33:21 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:18.194 20:33:21 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:18.194 20:33:21 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:18.194 20:33:21 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:18.194 No valid GPT data, bailing 00:03:18.194 20:33:21 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:18.194 20:33:21 -- scripts/common.sh@394 -- # pt= 00:03:18.194 20:33:21 -- scripts/common.sh@395 -- # return 1 00:03:18.194 20:33:21 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:18.194 1+0 records in 00:03:18.194 1+0 records out 00:03:18.194 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00164454 s, 638 MB/s 00:03:18.194 20:33:21 -- spdk/autotest.sh@105 -- # sync 00:03:18.194 20:33:21 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:18.194 20:33:21 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:18.194 20:33:21 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:20.096 20:33:23 -- spdk/autotest.sh@111 -- # uname -s 00:03:20.096 20:33:23 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:20.096 20:33:23 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:20.096 20:33:23 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:21.028 Hugepages 00:03:21.028 node hugesize free / total 00:03:21.028 node0 1048576kB 0 / 0 00:03:21.028 node0 2048kB 0 / 0 00:03:21.028 node1 1048576kB 0 / 0 00:03:21.028 node1 2048kB 0 / 0 00:03:21.028 00:03:21.028 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:21.028 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:21.028 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:21.028 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:21.028 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:21.028 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:21.028 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:21.028 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:21.028 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:21.028 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:21.029 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:21.029 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:21.029 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:21.029 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:21.029 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:21.029 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:21.029 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:21.029 I/OAT 0000:80:04.7 8086 
0e27 1 ioatdma - - 00:03:21.029 20:33:24 -- spdk/autotest.sh@117 -- # uname -s 00:03:21.029 20:33:24 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:21.029 20:33:24 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:21.029 20:33:24 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:22.403 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:22.403 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:22.403 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:22.403 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:22.403 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:22.403 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:22.403 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:22.403 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:22.403 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:22.403 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:22.403 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:22.403 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:22.403 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:22.403 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:22.403 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:22.403 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:23.348 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:03:23.606 20:33:27 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:24.561 20:33:28 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:24.561 20:33:28 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:24.561 20:33:28 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:24.561 20:33:28 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:24.561 20:33:28 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:24.561 20:33:28 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:24.561 20:33:28 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:24.561 20:33:28 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:24.561 20:33:28 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:24.561 20:33:28 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:24.561 20:33:28 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0b:00.0 00:03:24.561 20:33:28 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:25.936 Waiting for block devices as requested 00:03:25.936 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:25.936 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:25.936 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:25.936 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:26.195 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:26.195 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:26.195 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:26.195 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:26.454 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:03:26.454 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:26.454 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:26.713 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:26.713 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:26.713 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:26.971 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:26.971 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:26.971 0000:80:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:03:27.229 20:33:30 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:27.229 20:33:30 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:0b:00.0 00:03:27.229 20:33:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:27.229 20:33:30 -- common/autotest_common.sh@1487 -- # grep 0000:0b:00.0/nvme/nvme 00:03:27.229 20:33:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:03:27.229 20:33:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 ]] 00:03:27.229 20:33:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:03:27.229 20:33:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:27.229 20:33:30 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:27.229 20:33:30 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:27.229 20:33:30 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:27.229 20:33:30 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:27.229 20:33:30 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:27.229 20:33:30 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:03:27.229 20:33:30 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:27.229 20:33:30 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:27.229 20:33:30 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:27.229 20:33:30 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:27.229 20:33:30 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:27.229 20:33:30 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:27.229 20:33:30 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:27.229 20:33:30 -- common/autotest_common.sh@1543 -- # continue 00:03:27.229 20:33:30 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:27.229 20:33:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:27.229 20:33:30 -- common/autotest_common.sh@10 -- # set +x 00:03:27.229 20:33:30 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:27.229 20:33:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:27.229 20:33:30 -- common/autotest_common.sh@10 -- # set +x 00:03:27.229 20:33:30 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:28.604 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:28.604 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:28.604 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:28.604 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:28.604 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:28.604 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:28.604 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:28.604 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:28.604 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:28.604 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:28.604 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:28.604 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:28.604 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:28.604 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:28.604 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:28.604 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:29.545 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:03:29.545 20:33:33 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:03:29.545 20:33:33 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:29.545 20:33:33 -- common/autotest_common.sh@10 -- # set +x 00:03:29.801 20:33:33 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:29.801 20:33:33 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:29.801 20:33:33 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:29.801 20:33:33 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:29.801 20:33:33 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:29.801 20:33:33 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:29.801 20:33:33 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:29.801 20:33:33 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:29.801 20:33:33 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:29.801 20:33:33 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:29.801 20:33:33 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:29.801 20:33:33 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:29.801 20:33:33 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:29.802 20:33:33 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:29.802 20:33:33 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0b:00.0 00:03:29.802 20:33:33 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:29.802 20:33:33 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:0b:00.0/device 00:03:29.802 20:33:33 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:29.802 20:33:33 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:29.802 20:33:33 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:29.802 20:33:33 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:29.802 20:33:33 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:0b:00.0 00:03:29.802 20:33:33 -- common/autotest_common.sh@1579 -- # [[ -z 0000:0b:00.0 ]] 00:03:29.802 20:33:33 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=1529568 00:03:29.802 20:33:33 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:29.802 20:33:33 -- common/autotest_common.sh@1585 -- # waitforlisten 1529568 00:03:29.802 20:33:33 -- common/autotest_common.sh@835 -- # '[' -z 1529568 ']' 00:03:29.802 20:33:33 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:29.802 20:33:33 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:29.802 20:33:33 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:29.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:29.802 20:33:33 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:29.802 20:33:33 -- common/autotest_common.sh@10 -- # set +x 00:03:29.802 [2024-11-26 20:33:33.384434] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
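Before opal_revert_cleanup launches spdk_tgt, the trace above turns "which NVMe drives are present" into a concrete BDF list and checks whether the controller advertises namespace management. A condensed sketch of that flow, assuming the gen_nvme.sh JSON shape and nvme-cli text layout shown in the trace, and filtering on the 0x0a54 device id used in this run:

```bash
#!/usr/bin/env bash
# Condensed sketch of the BDF discovery and capability check traced above (needs root).
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Enumerate NVMe PCI addresses (traddr) from the generated bdev config.
mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')

for bdf in "${bdfs[@]}"; do
    # Keep only the controller model under test in this run (device id 0x0a54).
    [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] || continue

    # Resolve which /dev/nvmeX sits behind the BDF (mirrors get_nvme_ctrlr_from_bdf).
    ctrlr=
    for sysfs in /sys/class/nvme/nvme*; do
        if readlink -f "$sysfs" | grep -q "$bdf/nvme/nvme"; then
            ctrlr=/dev/$(basename "$sysfs")
            break
        fi
    done
    [[ -n $ctrlr ]] || continue

    # OACS bit 3 (0x8) advertises namespace management; this run reports oacs=0xf.
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
    if (( oacs & 0x8 )); then
        echo "$ctrlr ($bdf) supports namespace management"
    fi
done
```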
00:03:29.802 [2024-11-26 20:33:33.384533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1529568 ] 00:03:29.802 [2024-11-26 20:33:33.451006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:30.059 [2024-11-26 20:33:33.512407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:30.316 20:33:33 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:30.316 20:33:33 -- common/autotest_common.sh@868 -- # return 0 00:03:30.316 20:33:33 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:30.316 20:33:33 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:30.316 20:33:33 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0 00:03:33.621 nvme0n1 00:03:33.622 20:33:36 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:33.622 [2024-11-26 20:33:37.150876] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:33.622 [2024-11-26 20:33:37.150919] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:33.622 request: 00:03:33.622 { 00:03:33.622 "nvme_ctrlr_name": "nvme0", 00:03:33.622 "password": "test", 00:03:33.622 "method": "bdev_nvme_opal_revert", 00:03:33.622 "req_id": 1 00:03:33.622 } 00:03:33.622 Got JSON-RPC error response 00:03:33.622 response: 00:03:33.622 { 00:03:33.622 "code": -32603, 00:03:33.622 "message": "Internal error" 00:03:33.622 } 00:03:33.622 20:33:37 -- common/autotest_common.sh@1591 -- # true 00:03:33.622 20:33:37 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:33.622 20:33:37 -- common/autotest_common.sh@1595 -- # killprocess 1529568 00:03:33.622 20:33:37 -- common/autotest_common.sh@954 -- # '[' -z 1529568 ']' 00:03:33.622 20:33:37 -- common/autotest_common.sh@958 -- # kill -0 1529568 00:03:33.622 20:33:37 -- common/autotest_common.sh@959 -- # uname 00:03:33.622 20:33:37 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:33.622 20:33:37 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1529568 00:03:33.622 20:33:37 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:33.622 20:33:37 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:33.622 20:33:37 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1529568' 00:03:33.622 killing process with pid 1529568 00:03:33.622 20:33:37 -- common/autotest_common.sh@973 -- # kill 1529568 00:03:33.622 20:33:37 -- common/autotest_common.sh@978 -- # wait 1529568 00:03:35.519 20:33:38 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:35.519 20:33:38 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:35.519 20:33:38 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:35.519 20:33:38 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:35.519 20:33:38 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:35.519 20:33:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:35.519 20:33:38 -- common/autotest_common.sh@10 -- # set +x 00:03:35.519 20:33:38 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:35.519 20:33:38 -- spdk/autotest.sh@155 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:35.519 20:33:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:35.519 20:33:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:35.519 20:33:38 -- common/autotest_common.sh@10 -- # set +x 00:03:35.519 ************************************ 00:03:35.519 START TEST env 00:03:35.519 ************************************ 00:03:35.519 20:33:38 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:35.519 * Looking for test storage... 00:03:35.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:35.519 20:33:38 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:35.519 20:33:38 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:35.519 20:33:38 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:35.519 20:33:39 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:35.519 20:33:39 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:35.519 20:33:39 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:35.519 20:33:39 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:35.519 20:33:39 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:35.519 20:33:39 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:35.519 20:33:39 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:35.519 20:33:39 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:35.519 20:33:39 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:35.519 20:33:39 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:35.519 20:33:39 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:35.519 20:33:39 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:35.519 20:33:39 env -- scripts/common.sh@344 -- # case "$op" in 00:03:35.519 20:33:39 env -- scripts/common.sh@345 -- # : 1 00:03:35.519 20:33:39 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:35.519 20:33:39 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:35.519 20:33:39 env -- scripts/common.sh@365 -- # decimal 1 00:03:35.519 20:33:39 env -- scripts/common.sh@353 -- # local d=1 00:03:35.519 20:33:39 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:35.519 20:33:39 env -- scripts/common.sh@355 -- # echo 1 00:03:35.519 20:33:39 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:35.519 20:33:39 env -- scripts/common.sh@366 -- # decimal 2 00:03:35.519 20:33:39 env -- scripts/common.sh@353 -- # local d=2 00:03:35.519 20:33:39 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:35.519 20:33:39 env -- scripts/common.sh@355 -- # echo 2 00:03:35.519 20:33:39 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:35.519 20:33:39 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:35.519 20:33:39 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:35.519 20:33:39 env -- scripts/common.sh@368 -- # return 0 00:03:35.519 20:33:39 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:35.519 20:33:39 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:35.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.519 --rc genhtml_branch_coverage=1 00:03:35.519 --rc genhtml_function_coverage=1 00:03:35.519 --rc genhtml_legend=1 00:03:35.519 --rc geninfo_all_blocks=1 00:03:35.519 --rc geninfo_unexecuted_blocks=1 00:03:35.519 00:03:35.519 ' 00:03:35.519 20:33:39 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:35.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.519 --rc genhtml_branch_coverage=1 00:03:35.519 --rc genhtml_function_coverage=1 00:03:35.519 --rc genhtml_legend=1 00:03:35.519 --rc geninfo_all_blocks=1 00:03:35.519 --rc geninfo_unexecuted_blocks=1 00:03:35.519 00:03:35.519 ' 00:03:35.519 20:33:39 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:35.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.519 --rc genhtml_branch_coverage=1 00:03:35.519 --rc genhtml_function_coverage=1 00:03:35.519 --rc genhtml_legend=1 00:03:35.519 --rc geninfo_all_blocks=1 00:03:35.519 --rc geninfo_unexecuted_blocks=1 00:03:35.519 00:03:35.519 ' 00:03:35.519 20:33:39 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:35.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:35.519 --rc genhtml_branch_coverage=1 00:03:35.519 --rc genhtml_function_coverage=1 00:03:35.519 --rc genhtml_legend=1 00:03:35.519 --rc geninfo_all_blocks=1 00:03:35.519 --rc geninfo_unexecuted_blocks=1 00:03:35.519 00:03:35.519 ' 00:03:35.519 20:33:39 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:35.519 20:33:39 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:35.519 20:33:39 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:35.519 20:33:39 env -- common/autotest_common.sh@10 -- # set +x 00:03:35.519 ************************************ 00:03:35.519 START TEST env_memory 00:03:35.519 ************************************ 00:03:35.519 20:33:39 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:35.519 00:03:35.519 00:03:35.519 CUnit - A unit testing framework for C - Version 2.1-3 00:03:35.519 http://cunit.sourceforge.net/ 00:03:35.519 00:03:35.519 00:03:35.519 Suite: memory 00:03:35.519 Test: alloc and free memory map ...[2024-11-26 20:33:39.126588] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:35.519 passed 00:03:35.519 Test: mem map translation ...[2024-11-26 20:33:39.146394] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:35.519 [2024-11-26 20:33:39.146416] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:35.519 [2024-11-26 20:33:39.146466] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:35.519 [2024-11-26 20:33:39.146478] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:35.519 passed 00:03:35.519 Test: mem map registration ...[2024-11-26 20:33:39.188792] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:35.519 [2024-11-26 20:33:39.188813] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:35.519 passed 00:03:35.777 Test: mem map adjacent registrations ...passed 00:03:35.777 00:03:35.777 Run Summary: Type Total Ran Passed Failed Inactive 00:03:35.777 suites 1 1 n/a 0 0 00:03:35.777 tests 4 4 4 0 0 00:03:35.777 asserts 152 152 152 0 n/a 00:03:35.777 00:03:35.777 Elapsed time = 0.145 seconds 00:03:35.777 00:03:35.777 real 0m0.155s 00:03:35.777 user 0m0.146s 00:03:35.777 sys 0m0.008s 00:03:35.777 20:33:39 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:35.777 20:33:39 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:35.777 ************************************ 00:03:35.777 END TEST env_memory 00:03:35.777 ************************************ 00:03:35.777 20:33:39 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:35.777 20:33:39 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:35.777 20:33:39 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:35.777 20:33:39 env -- common/autotest_common.sh@10 -- # set +x 00:03:35.777 ************************************ 00:03:35.777 START TEST env_vtophys 00:03:35.777 ************************************ 00:03:35.777 20:33:39 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:35.777 EAL: lib.eal log level changed from notice to debug 00:03:35.777 EAL: Detected lcore 0 as core 0 on socket 0 00:03:35.777 EAL: Detected lcore 1 as core 1 on socket 0 00:03:35.777 EAL: Detected lcore 2 as core 2 on socket 0 00:03:35.777 EAL: Detected lcore 3 as core 3 on socket 0 00:03:35.777 EAL: Detected lcore 4 as core 4 on socket 0 00:03:35.777 EAL: Detected lcore 5 as core 5 on socket 0 00:03:35.777 EAL: Detected lcore 6 as core 8 on socket 0 00:03:35.777 EAL: Detected lcore 7 as core 9 on socket 0 00:03:35.777 EAL: Detected lcore 8 as core 10 on socket 0 00:03:35.777 EAL: Detected lcore 9 as core 11 on socket 0 00:03:35.777 EAL: Detected lcore 10 
as core 12 on socket 0 00:03:35.777 EAL: Detected lcore 11 as core 13 on socket 0 00:03:35.777 EAL: Detected lcore 12 as core 0 on socket 1 00:03:35.777 EAL: Detected lcore 13 as core 1 on socket 1 00:03:35.777 EAL: Detected lcore 14 as core 2 on socket 1 00:03:35.777 EAL: Detected lcore 15 as core 3 on socket 1 00:03:35.777 EAL: Detected lcore 16 as core 4 on socket 1 00:03:35.777 EAL: Detected lcore 17 as core 5 on socket 1 00:03:35.777 EAL: Detected lcore 18 as core 8 on socket 1 00:03:35.777 EAL: Detected lcore 19 as core 9 on socket 1 00:03:35.777 EAL: Detected lcore 20 as core 10 on socket 1 00:03:35.777 EAL: Detected lcore 21 as core 11 on socket 1 00:03:35.777 EAL: Detected lcore 22 as core 12 on socket 1 00:03:35.777 EAL: Detected lcore 23 as core 13 on socket 1 00:03:35.777 EAL: Detected lcore 24 as core 0 on socket 0 00:03:35.777 EAL: Detected lcore 25 as core 1 on socket 0 00:03:35.777 EAL: Detected lcore 26 as core 2 on socket 0 00:03:35.777 EAL: Detected lcore 27 as core 3 on socket 0 00:03:35.777 EAL: Detected lcore 28 as core 4 on socket 0 00:03:35.777 EAL: Detected lcore 29 as core 5 on socket 0 00:03:35.777 EAL: Detected lcore 30 as core 8 on socket 0 00:03:35.777 EAL: Detected lcore 31 as core 9 on socket 0 00:03:35.777 EAL: Detected lcore 32 as core 10 on socket 0 00:03:35.777 EAL: Detected lcore 33 as core 11 on socket 0 00:03:35.777 EAL: Detected lcore 34 as core 12 on socket 0 00:03:35.777 EAL: Detected lcore 35 as core 13 on socket 0 00:03:35.777 EAL: Detected lcore 36 as core 0 on socket 1 00:03:35.777 EAL: Detected lcore 37 as core 1 on socket 1 00:03:35.777 EAL: Detected lcore 38 as core 2 on socket 1 00:03:35.777 EAL: Detected lcore 39 as core 3 on socket 1 00:03:35.777 EAL: Detected lcore 40 as core 4 on socket 1 00:03:35.777 EAL: Detected lcore 41 as core 5 on socket 1 00:03:35.777 EAL: Detected lcore 42 as core 8 on socket 1 00:03:35.777 EAL: Detected lcore 43 as core 9 on socket 1 00:03:35.777 EAL: Detected lcore 44 as core 10 on socket 1 00:03:35.777 EAL: Detected lcore 45 as core 11 on socket 1 00:03:35.777 EAL: Detected lcore 46 as core 12 on socket 1 00:03:35.777 EAL: Detected lcore 47 as core 13 on socket 1 00:03:35.777 EAL: Maximum logical cores by configuration: 128 00:03:35.777 EAL: Detected CPU lcores: 48 00:03:35.777 EAL: Detected NUMA nodes: 2 00:03:35.777 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:35.777 EAL: Detected shared linkage of DPDK 00:03:35.777 EAL: No shared files mode enabled, IPC will be disabled 00:03:35.777 EAL: Bus pci wants IOVA as 'DC' 00:03:35.777 EAL: Buses did not request a specific IOVA mode. 00:03:35.777 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:35.777 EAL: Selected IOVA mode 'VA' 00:03:35.777 EAL: Probing VFIO support... 00:03:35.777 EAL: IOMMU type 1 (Type 1) is supported 00:03:35.777 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:35.777 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:35.777 EAL: VFIO support initialized 00:03:35.777 EAL: Ask a virtual area of 0x2e000 bytes 00:03:35.777 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:35.777 EAL: Setting up physically contiguous memory... 
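The memseg lists EAL sets up next are carved out of the 2 MB hugepage pools that `setup.sh status` reported per NUMA node earlier in the log (node0/node1). A minimal sketch of how such pools are typically reserved through sysfs; the page count below is an illustrative assumption, not a value taken from this run, and SPDK's setup.sh performs the equivalent work with more checks:

```bash
#!/usr/bin/env bash
# Reserve 2 MB hugepages on both NUMA nodes before starting a DPDK/SPDK app.
pages_per_node=1024   # assumed value: 2 GB per node at 2048 kB per page

for node in /sys/devices/system/node/node[0-1]; do
    echo "$pages_per_node" | sudo tee "$node/hugepages/hugepages-2048kB/nr_hugepages" > /dev/null
done

# Verify what the kernel actually granted (mirrors the "node hugesize free / total" table).
grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
```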
00:03:35.777 EAL: Setting maximum number of open files to 524288 00:03:35.777 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:35.777 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:35.777 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:35.777 EAL: Ask a virtual area of 0x61000 bytes 00:03:35.777 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:35.777 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:35.777 EAL: Ask a virtual area of 0x400000000 bytes 00:03:35.777 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:35.777 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:35.777 EAL: Ask a virtual area of 0x61000 bytes 00:03:35.777 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:35.777 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:35.777 EAL: Ask a virtual area of 0x400000000 bytes 00:03:35.777 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:35.777 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:35.777 EAL: Ask a virtual area of 0x61000 bytes 00:03:35.777 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:35.777 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:35.777 EAL: Ask a virtual area of 0x400000000 bytes 00:03:35.777 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:35.777 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:35.777 EAL: Ask a virtual area of 0x61000 bytes 00:03:35.777 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:35.777 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:35.777 EAL: Ask a virtual area of 0x400000000 bytes 00:03:35.777 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:35.777 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:35.777 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:35.777 EAL: Ask a virtual area of 0x61000 bytes 00:03:35.777 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:35.777 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:35.777 EAL: Ask a virtual area of 0x400000000 bytes 00:03:35.777 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:35.777 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:35.777 EAL: Ask a virtual area of 0x61000 bytes 00:03:35.777 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:35.777 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:35.777 EAL: Ask a virtual area of 0x400000000 bytes 00:03:35.777 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:35.777 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:35.777 EAL: Ask a virtual area of 0x61000 bytes 00:03:35.777 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:35.777 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:35.777 EAL: Ask a virtual area of 0x400000000 bytes 00:03:35.777 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:35.777 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:35.777 EAL: Ask a virtual area of 0x61000 bytes 00:03:35.777 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:35.777 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:35.777 EAL: Ask a virtual area of 0x400000000 bytes 00:03:35.777 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:35.777 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:35.777 EAL: Hugepages will be freed exactly as allocated. 00:03:35.777 EAL: No shared files mode enabled, IPC is disabled 00:03:35.777 EAL: No shared files mode enabled, IPC is disabled 00:03:35.777 EAL: TSC frequency is ~2700000 KHz 00:03:35.777 EAL: Main lcore 0 is ready (tid=7f281ed11a00;cpuset=[0]) 00:03:35.777 EAL: Trying to obtain current memory policy. 00:03:35.777 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:35.777 EAL: Restoring previous memory policy: 0 00:03:35.777 EAL: request: mp_malloc_sync 00:03:35.777 EAL: No shared files mode enabled, IPC is disabled 00:03:35.777 EAL: Heap on socket 0 was expanded by 2MB 00:03:35.777 EAL: No shared files mode enabled, IPC is disabled 00:03:35.777 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:35.777 EAL: Mem event callback 'spdk:(nil)' registered 00:03:35.777 00:03:35.777 00:03:35.777 CUnit - A unit testing framework for C - Version 2.1-3 00:03:35.777 http://cunit.sourceforge.net/ 00:03:35.777 00:03:35.777 00:03:35.777 Suite: components_suite 00:03:35.777 Test: vtophys_malloc_test ...passed 00:03:35.777 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:35.777 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:35.777 EAL: Restoring previous memory policy: 4 00:03:35.777 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.777 EAL: request: mp_malloc_sync 00:03:35.777 EAL: No shared files mode enabled, IPC is disabled 00:03:35.777 EAL: Heap on socket 0 was expanded by 4MB 00:03:35.777 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.777 EAL: request: mp_malloc_sync 00:03:35.777 EAL: No shared files mode enabled, IPC is disabled 00:03:35.777 EAL: Heap on socket 0 was shrunk by 4MB 00:03:35.777 EAL: Trying to obtain current memory policy. 00:03:35.777 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:35.777 EAL: Restoring previous memory policy: 4 00:03:35.777 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.777 EAL: request: mp_malloc_sync 00:03:35.777 EAL: No shared files mode enabled, IPC is disabled 00:03:35.777 EAL: Heap on socket 0 was expanded by 6MB 00:03:35.777 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.777 EAL: request: mp_malloc_sync 00:03:35.777 EAL: No shared files mode enabled, IPC is disabled 00:03:35.777 EAL: Heap on socket 0 was shrunk by 6MB 00:03:35.777 EAL: Trying to obtain current memory policy. 00:03:35.777 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:35.777 EAL: Restoring previous memory policy: 4 00:03:35.777 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.777 EAL: request: mp_malloc_sync 00:03:35.777 EAL: No shared files mode enabled, IPC is disabled 00:03:35.777 EAL: Heap on socket 0 was expanded by 10MB 00:03:35.777 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.777 EAL: request: mp_malloc_sync 00:03:35.777 EAL: No shared files mode enabled, IPC is disabled 00:03:35.777 EAL: Heap on socket 0 was shrunk by 10MB 00:03:35.777 EAL: Trying to obtain current memory policy. 
00:03:35.777 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:35.777 EAL: Restoring previous memory policy: 4 00:03:35.777 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.777 EAL: request: mp_malloc_sync 00:03:35.777 EAL: No shared files mode enabled, IPC is disabled 00:03:35.777 EAL: Heap on socket 0 was expanded by 18MB 00:03:35.777 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.777 EAL: request: mp_malloc_sync 00:03:35.777 EAL: No shared files mode enabled, IPC is disabled 00:03:35.777 EAL: Heap on socket 0 was shrunk by 18MB 00:03:35.777 EAL: Trying to obtain current memory policy. 00:03:35.777 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:35.777 EAL: Restoring previous memory policy: 4 00:03:35.777 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.777 EAL: request: mp_malloc_sync 00:03:35.777 EAL: No shared files mode enabled, IPC is disabled 00:03:35.777 EAL: Heap on socket 0 was expanded by 34MB 00:03:35.777 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.777 EAL: request: mp_malloc_sync 00:03:35.777 EAL: No shared files mode enabled, IPC is disabled 00:03:35.777 EAL: Heap on socket 0 was shrunk by 34MB 00:03:35.777 EAL: Trying to obtain current memory policy. 00:03:35.778 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:35.778 EAL: Restoring previous memory policy: 4 00:03:35.778 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.778 EAL: request: mp_malloc_sync 00:03:35.778 EAL: No shared files mode enabled, IPC is disabled 00:03:35.778 EAL: Heap on socket 0 was expanded by 66MB 00:03:35.778 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.778 EAL: request: mp_malloc_sync 00:03:35.778 EAL: No shared files mode enabled, IPC is disabled 00:03:35.778 EAL: Heap on socket 0 was shrunk by 66MB 00:03:35.778 EAL: Trying to obtain current memory policy. 00:03:35.778 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:36.034 EAL: Restoring previous memory policy: 4 00:03:36.034 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.034 EAL: request: mp_malloc_sync 00:03:36.034 EAL: No shared files mode enabled, IPC is disabled 00:03:36.034 EAL: Heap on socket 0 was expanded by 130MB 00:03:36.034 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.034 EAL: request: mp_malloc_sync 00:03:36.034 EAL: No shared files mode enabled, IPC is disabled 00:03:36.034 EAL: Heap on socket 0 was shrunk by 130MB 00:03:36.034 EAL: Trying to obtain current memory policy. 00:03:36.034 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:36.034 EAL: Restoring previous memory policy: 4 00:03:36.034 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.034 EAL: request: mp_malloc_sync 00:03:36.034 EAL: No shared files mode enabled, IPC is disabled 00:03:36.034 EAL: Heap on socket 0 was expanded by 258MB 00:03:36.034 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.034 EAL: request: mp_malloc_sync 00:03:36.034 EAL: No shared files mode enabled, IPC is disabled 00:03:36.034 EAL: Heap on socket 0 was shrunk by 258MB 00:03:36.034 EAL: Trying to obtain current memory policy. 
00:03:36.034 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:36.291 EAL: Restoring previous memory policy: 4 00:03:36.291 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.291 EAL: request: mp_malloc_sync 00:03:36.291 EAL: No shared files mode enabled, IPC is disabled 00:03:36.291 EAL: Heap on socket 0 was expanded by 514MB 00:03:36.291 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.548 EAL: request: mp_malloc_sync 00:03:36.548 EAL: No shared files mode enabled, IPC is disabled 00:03:36.548 EAL: Heap on socket 0 was shrunk by 514MB 00:03:36.548 EAL: Trying to obtain current memory policy. 00:03:36.548 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:36.805 EAL: Restoring previous memory policy: 4 00:03:36.805 EAL: Calling mem event callback 'spdk:(nil)' 00:03:36.805 EAL: request: mp_malloc_sync 00:03:36.805 EAL: No shared files mode enabled, IPC is disabled 00:03:36.805 EAL: Heap on socket 0 was expanded by 1026MB 00:03:37.063 EAL: Calling mem event callback 'spdk:(nil)' 00:03:37.321 EAL: request: mp_malloc_sync 00:03:37.321 EAL: No shared files mode enabled, IPC is disabled 00:03:37.321 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:37.321 passed 00:03:37.321 00:03:37.321 Run Summary: Type Total Ran Passed Failed Inactive 00:03:37.321 suites 1 1 n/a 0 0 00:03:37.321 tests 2 2 2 0 0 00:03:37.321 asserts 497 497 497 0 n/a 00:03:37.321 00:03:37.321 Elapsed time = 1.355 seconds 00:03:37.321 EAL: Calling mem event callback 'spdk:(nil)' 00:03:37.321 EAL: request: mp_malloc_sync 00:03:37.321 EAL: No shared files mode enabled, IPC is disabled 00:03:37.321 EAL: Heap on socket 0 was shrunk by 2MB 00:03:37.321 EAL: No shared files mode enabled, IPC is disabled 00:03:37.321 EAL: No shared files mode enabled, IPC is disabled 00:03:37.321 EAL: No shared files mode enabled, IPC is disabled 00:03:37.321 00:03:37.321 real 0m1.477s 00:03:37.321 user 0m0.856s 00:03:37.321 sys 0m0.584s 00:03:37.321 20:33:40 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:37.321 20:33:40 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:37.321 ************************************ 00:03:37.321 END TEST env_vtophys 00:03:37.321 ************************************ 00:03:37.321 20:33:40 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:37.321 20:33:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:37.321 20:33:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:37.321 20:33:40 env -- common/autotest_common.sh@10 -- # set +x 00:03:37.321 ************************************ 00:03:37.321 START TEST env_pci 00:03:37.321 ************************************ 00:03:37.321 20:33:40 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:37.321 00:03:37.321 00:03:37.321 CUnit - A unit testing framework for C - Version 2.1-3 00:03:37.321 http://cunit.sourceforge.net/ 00:03:37.321 00:03:37.321 00:03:37.321 Suite: pci 00:03:37.321 Test: pci_hook ...[2024-11-26 20:33:40.835826] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1530479 has claimed it 00:03:37.321 EAL: Cannot find device (10000:00:01.0) 00:03:37.321 EAL: Failed to attach device on primary process 00:03:37.321 passed 00:03:37.321 00:03:37.321 Run Summary: Type Total Ran Passed Failed Inactive 
00:03:37.321 suites 1 1 n/a 0 0 00:03:37.321 tests 1 1 1 0 0 00:03:37.321 asserts 25 25 25 0 n/a 00:03:37.321 00:03:37.321 Elapsed time = 0.022 seconds 00:03:37.321 00:03:37.321 real 0m0.035s 00:03:37.321 user 0m0.012s 00:03:37.321 sys 0m0.023s 00:03:37.321 20:33:40 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:37.321 20:33:40 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:37.321 ************************************ 00:03:37.321 END TEST env_pci 00:03:37.321 ************************************ 00:03:37.321 20:33:40 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:37.321 20:33:40 env -- env/env.sh@15 -- # uname 00:03:37.321 20:33:40 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:37.321 20:33:40 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:37.321 20:33:40 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:37.322 20:33:40 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:37.322 20:33:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:37.322 20:33:40 env -- common/autotest_common.sh@10 -- # set +x 00:03:37.322 ************************************ 00:03:37.322 START TEST env_dpdk_post_init 00:03:37.322 ************************************ 00:03:37.322 20:33:40 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:37.322 EAL: Detected CPU lcores: 48 00:03:37.322 EAL: Detected NUMA nodes: 2 00:03:37.322 EAL: Detected shared linkage of DPDK 00:03:37.322 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:37.322 EAL: Selected IOVA mode 'VA' 00:03:37.322 EAL: VFIO support initialized 00:03:37.322 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:37.322 EAL: Using IOMMU type 1 (Type 1) 00:03:37.581 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:37.581 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:37.581 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:37.581 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:37.581 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:37.581 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:37.581 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:37.581 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:38.517 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:0b:00.0 (socket 0) 00:03:38.517 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:38.517 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:38.517 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:38.517 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:38.517 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:38.517 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:38.517 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:38.517 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 
00:03:41.791 EAL: Releasing PCI mapped resource for 0000:0b:00.0 00:03:41.791 EAL: Calling pci_unmap_resource for 0000:0b:00.0 at 0x202001020000 00:03:41.791 Starting DPDK initialization... 00:03:41.791 Starting SPDK post initialization... 00:03:41.791 SPDK NVMe probe 00:03:41.791 Attaching to 0000:0b:00.0 00:03:41.791 Attached to 0000:0b:00.0 00:03:41.791 Cleaning up... 00:03:41.791 00:03:41.791 real 0m4.368s 00:03:41.791 user 0m2.979s 00:03:41.791 sys 0m0.444s 00:03:41.791 20:33:45 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:41.791 20:33:45 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:41.791 ************************************ 00:03:41.791 END TEST env_dpdk_post_init 00:03:41.791 ************************************ 00:03:41.791 20:33:45 env -- env/env.sh@26 -- # uname 00:03:41.791 20:33:45 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:41.791 20:33:45 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:41.791 20:33:45 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:41.791 20:33:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:41.791 20:33:45 env -- common/autotest_common.sh@10 -- # set +x 00:03:41.791 ************************************ 00:03:41.791 START TEST env_mem_callbacks 00:03:41.791 ************************************ 00:03:41.791 20:33:45 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:41.791 EAL: Detected CPU lcores: 48 00:03:41.791 EAL: Detected NUMA nodes: 2 00:03:41.791 EAL: Detected shared linkage of DPDK 00:03:41.791 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:41.791 EAL: Selected IOVA mode 'VA' 00:03:41.791 EAL: VFIO support initialized 00:03:41.792 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:41.792 00:03:41.792 00:03:41.792 CUnit - A unit testing framework for C - Version 2.1-3 00:03:41.792 http://cunit.sourceforge.net/ 00:03:41.792 00:03:41.792 00:03:41.792 Suite: memory 00:03:41.792 Test: test ... 
00:03:41.792 register 0x200000200000 2097152 00:03:41.792 malloc 3145728 00:03:41.792 register 0x200000400000 4194304 00:03:41.792 buf 0x200000500000 len 3145728 PASSED 00:03:41.792 malloc 64 00:03:41.792 buf 0x2000004fff40 len 64 PASSED 00:03:41.792 malloc 4194304 00:03:41.792 register 0x200000800000 6291456 00:03:41.792 buf 0x200000a00000 len 4194304 PASSED 00:03:41.792 free 0x200000500000 3145728 00:03:41.792 free 0x2000004fff40 64 00:03:41.792 unregister 0x200000400000 4194304 PASSED 00:03:41.792 free 0x200000a00000 4194304 00:03:41.792 unregister 0x200000800000 6291456 PASSED 00:03:41.792 malloc 8388608 00:03:41.792 register 0x200000400000 10485760 00:03:41.792 buf 0x200000600000 len 8388608 PASSED 00:03:41.792 free 0x200000600000 8388608 00:03:41.792 unregister 0x200000400000 10485760 PASSED 00:03:41.792 passed 00:03:41.792 00:03:41.792 Run Summary: Type Total Ran Passed Failed Inactive 00:03:41.792 suites 1 1 n/a 0 0 00:03:41.792 tests 1 1 1 0 0 00:03:41.792 asserts 15 15 15 0 n/a 00:03:41.792 00:03:41.792 Elapsed time = 0.005 seconds 00:03:41.792 00:03:41.792 real 0m0.049s 00:03:41.792 user 0m0.012s 00:03:41.792 sys 0m0.037s 00:03:41.792 20:33:45 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:41.792 20:33:45 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:41.792 ************************************ 00:03:41.792 END TEST env_mem_callbacks 00:03:41.792 ************************************ 00:03:41.792 00:03:41.792 real 0m6.483s 00:03:41.792 user 0m4.208s 00:03:41.792 sys 0m1.314s 00:03:41.792 20:33:45 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:41.792 20:33:45 env -- common/autotest_common.sh@10 -- # set +x 00:03:41.792 ************************************ 00:03:41.792 END TEST env 00:03:41.792 ************************************ 00:03:41.792 20:33:45 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:41.792 20:33:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:41.792 20:33:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:41.792 20:33:45 -- common/autotest_common.sh@10 -- # set +x 00:03:41.792 ************************************ 00:03:41.792 START TEST rpc 00:03:41.792 ************************************ 00:03:41.792 20:33:45 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:42.050 * Looking for test storage... 
00:03:42.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:42.050 20:33:45 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:42.050 20:33:45 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:42.050 20:33:45 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:42.050 20:33:45 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:42.050 20:33:45 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:42.050 20:33:45 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:42.050 20:33:45 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:42.050 20:33:45 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:42.050 20:33:45 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:42.050 20:33:45 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:42.050 20:33:45 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:42.050 20:33:45 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:42.050 20:33:45 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:42.050 20:33:45 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:42.050 20:33:45 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:42.050 20:33:45 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:42.050 20:33:45 rpc -- scripts/common.sh@345 -- # : 1 00:03:42.050 20:33:45 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:42.050 20:33:45 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:42.050 20:33:45 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:42.050 20:33:45 rpc -- scripts/common.sh@353 -- # local d=1 00:03:42.050 20:33:45 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:42.050 20:33:45 rpc -- scripts/common.sh@355 -- # echo 1 00:03:42.050 20:33:45 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:42.050 20:33:45 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:42.050 20:33:45 rpc -- scripts/common.sh@353 -- # local d=2 00:03:42.050 20:33:45 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:42.050 20:33:45 rpc -- scripts/common.sh@355 -- # echo 2 00:03:42.050 20:33:45 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:42.050 20:33:45 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:42.050 20:33:45 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:42.050 20:33:45 rpc -- scripts/common.sh@368 -- # return 0 00:03:42.050 20:33:45 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:42.050 20:33:45 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:42.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.050 --rc genhtml_branch_coverage=1 00:03:42.050 --rc genhtml_function_coverage=1 00:03:42.050 --rc genhtml_legend=1 00:03:42.050 --rc geninfo_all_blocks=1 00:03:42.050 --rc geninfo_unexecuted_blocks=1 00:03:42.050 00:03:42.050 ' 00:03:42.050 20:33:45 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:42.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.050 --rc genhtml_branch_coverage=1 00:03:42.050 --rc genhtml_function_coverage=1 00:03:42.050 --rc genhtml_legend=1 00:03:42.050 --rc geninfo_all_blocks=1 00:03:42.050 --rc geninfo_unexecuted_blocks=1 00:03:42.050 00:03:42.050 ' 00:03:42.050 20:33:45 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:42.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.050 --rc genhtml_branch_coverage=1 00:03:42.050 --rc genhtml_function_coverage=1 
00:03:42.050 --rc genhtml_legend=1 00:03:42.050 --rc geninfo_all_blocks=1 00:03:42.050 --rc geninfo_unexecuted_blocks=1 00:03:42.050 00:03:42.050 ' 00:03:42.050 20:33:45 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:42.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:42.050 --rc genhtml_branch_coverage=1 00:03:42.050 --rc genhtml_function_coverage=1 00:03:42.050 --rc genhtml_legend=1 00:03:42.050 --rc geninfo_all_blocks=1 00:03:42.050 --rc geninfo_unexecuted_blocks=1 00:03:42.050 00:03:42.050 ' 00:03:42.050 20:33:45 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1531252 00:03:42.050 20:33:45 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:42.050 20:33:45 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:42.050 20:33:45 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1531252 00:03:42.050 20:33:45 rpc -- common/autotest_common.sh@835 -- # '[' -z 1531252 ']' 00:03:42.050 20:33:45 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:42.051 20:33:45 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:42.051 20:33:45 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:42.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:42.051 20:33:45 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:42.051 20:33:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:42.051 [2024-11-26 20:33:45.653165] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:03:42.051 [2024-11-26 20:33:45.653257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1531252 ] 00:03:42.051 [2024-11-26 20:33:45.718099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:42.308 [2024-11-26 20:33:45.776373] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:42.308 [2024-11-26 20:33:45.776422] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1531252' to capture a snapshot of events at runtime. 00:03:42.308 [2024-11-26 20:33:45.776451] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:42.308 [2024-11-26 20:33:45.776462] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:42.308 [2024-11-26 20:33:45.776472] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1531252 for offline analysis/debug. 
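The app_setup_trace NOTICE lines above describe how to capture the 'bdev' tracepoint group that spdk_tgt was started with ('-e bdev'). A minimal sketch of that workflow, using only the commands named in this log; the build/bin location of the spdk_trace binary is an assumption:

    # Start the target with the bdev tracepoint group enabled, as rpc.sh does above.
    ./build/bin/spdk_tgt -e bdev &
    TGT_PID=$!
    # While the target runs, snapshot the trace ring buffer by PID,
    # exactly as the NOTICE suggests for pid 1531252.
    ./build/bin/spdk_trace -s spdk_tgt -p "$TGT_PID"
    # Or copy the shared-memory trace file named in the NOTICE for offline analysis.
    cp "/dev/shm/spdk_tgt_trace.pid$TGT_PID" /tmp/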
00:03:42.308 [2024-11-26 20:33:45.777085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:42.566 20:33:46 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:42.567 20:33:46 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:42.567 20:33:46 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:42.567 20:33:46 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:42.567 20:33:46 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:42.567 20:33:46 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:42.567 20:33:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:42.567 20:33:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:42.567 20:33:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:42.567 ************************************ 00:03:42.567 START TEST rpc_integrity 00:03:42.567 ************************************ 00:03:42.567 20:33:46 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:42.567 20:33:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:42.567 20:33:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.567 20:33:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.567 20:33:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.567 20:33:46 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:42.567 20:33:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:42.567 20:33:46 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:42.567 20:33:46 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:42.567 20:33:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.567 20:33:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.567 20:33:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.567 20:33:46 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:42.567 20:33:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:42.567 20:33:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.567 20:33:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.567 20:33:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.567 20:33:46 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:42.567 { 00:03:42.567 "name": "Malloc0", 00:03:42.567 "aliases": [ 00:03:42.567 "efa37fe8-6a10-44ca-a004-234ed236e26b" 00:03:42.567 ], 00:03:42.567 "product_name": "Malloc disk", 00:03:42.567 "block_size": 512, 00:03:42.567 "num_blocks": 16384, 00:03:42.567 "uuid": "efa37fe8-6a10-44ca-a004-234ed236e26b", 00:03:42.567 "assigned_rate_limits": { 00:03:42.567 "rw_ios_per_sec": 0, 00:03:42.567 "rw_mbytes_per_sec": 0, 00:03:42.567 "r_mbytes_per_sec": 0, 00:03:42.567 "w_mbytes_per_sec": 0 00:03:42.567 }, 
00:03:42.567 "claimed": false, 00:03:42.567 "zoned": false, 00:03:42.567 "supported_io_types": { 00:03:42.567 "read": true, 00:03:42.567 "write": true, 00:03:42.567 "unmap": true, 00:03:42.567 "flush": true, 00:03:42.567 "reset": true, 00:03:42.567 "nvme_admin": false, 00:03:42.567 "nvme_io": false, 00:03:42.567 "nvme_io_md": false, 00:03:42.567 "write_zeroes": true, 00:03:42.567 "zcopy": true, 00:03:42.567 "get_zone_info": false, 00:03:42.567 "zone_management": false, 00:03:42.567 "zone_append": false, 00:03:42.567 "compare": false, 00:03:42.567 "compare_and_write": false, 00:03:42.567 "abort": true, 00:03:42.567 "seek_hole": false, 00:03:42.567 "seek_data": false, 00:03:42.567 "copy": true, 00:03:42.567 "nvme_iov_md": false 00:03:42.567 }, 00:03:42.567 "memory_domains": [ 00:03:42.567 { 00:03:42.567 "dma_device_id": "system", 00:03:42.567 "dma_device_type": 1 00:03:42.567 }, 00:03:42.567 { 00:03:42.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:42.567 "dma_device_type": 2 00:03:42.567 } 00:03:42.567 ], 00:03:42.567 "driver_specific": {} 00:03:42.567 } 00:03:42.567 ]' 00:03:42.567 20:33:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:42.567 20:33:46 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:42.567 20:33:46 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:42.567 20:33:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.567 20:33:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.567 [2024-11-26 20:33:46.175801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:42.567 [2024-11-26 20:33:46.175851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:42.567 [2024-11-26 20:33:46.175873] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c42d20 00:03:42.567 [2024-11-26 20:33:46.175886] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:42.567 [2024-11-26 20:33:46.177225] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:42.567 [2024-11-26 20:33:46.177248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:42.567 Passthru0 00:03:42.567 20:33:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.567 20:33:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:42.567 20:33:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.567 20:33:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.567 20:33:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.567 20:33:46 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:42.567 { 00:03:42.567 "name": "Malloc0", 00:03:42.567 "aliases": [ 00:03:42.567 "efa37fe8-6a10-44ca-a004-234ed236e26b" 00:03:42.567 ], 00:03:42.567 "product_name": "Malloc disk", 00:03:42.567 "block_size": 512, 00:03:42.567 "num_blocks": 16384, 00:03:42.567 "uuid": "efa37fe8-6a10-44ca-a004-234ed236e26b", 00:03:42.567 "assigned_rate_limits": { 00:03:42.567 "rw_ios_per_sec": 0, 00:03:42.567 "rw_mbytes_per_sec": 0, 00:03:42.567 "r_mbytes_per_sec": 0, 00:03:42.567 "w_mbytes_per_sec": 0 00:03:42.567 }, 00:03:42.567 "claimed": true, 00:03:42.567 "claim_type": "exclusive_write", 00:03:42.567 "zoned": false, 00:03:42.567 "supported_io_types": { 00:03:42.567 "read": true, 00:03:42.567 "write": true, 00:03:42.567 "unmap": true, 00:03:42.567 "flush": 
true, 00:03:42.567 "reset": true, 00:03:42.567 "nvme_admin": false, 00:03:42.567 "nvme_io": false, 00:03:42.567 "nvme_io_md": false, 00:03:42.567 "write_zeroes": true, 00:03:42.567 "zcopy": true, 00:03:42.567 "get_zone_info": false, 00:03:42.567 "zone_management": false, 00:03:42.567 "zone_append": false, 00:03:42.567 "compare": false, 00:03:42.567 "compare_and_write": false, 00:03:42.567 "abort": true, 00:03:42.567 "seek_hole": false, 00:03:42.567 "seek_data": false, 00:03:42.567 "copy": true, 00:03:42.567 "nvme_iov_md": false 00:03:42.567 }, 00:03:42.567 "memory_domains": [ 00:03:42.567 { 00:03:42.567 "dma_device_id": "system", 00:03:42.567 "dma_device_type": 1 00:03:42.567 }, 00:03:42.567 { 00:03:42.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:42.567 "dma_device_type": 2 00:03:42.567 } 00:03:42.567 ], 00:03:42.567 "driver_specific": {} 00:03:42.567 }, 00:03:42.567 { 00:03:42.567 "name": "Passthru0", 00:03:42.567 "aliases": [ 00:03:42.567 "631abd91-30f3-524f-b3a4-c56a6c147f81" 00:03:42.567 ], 00:03:42.567 "product_name": "passthru", 00:03:42.567 "block_size": 512, 00:03:42.567 "num_blocks": 16384, 00:03:42.567 "uuid": "631abd91-30f3-524f-b3a4-c56a6c147f81", 00:03:42.567 "assigned_rate_limits": { 00:03:42.567 "rw_ios_per_sec": 0, 00:03:42.567 "rw_mbytes_per_sec": 0, 00:03:42.567 "r_mbytes_per_sec": 0, 00:03:42.567 "w_mbytes_per_sec": 0 00:03:42.567 }, 00:03:42.567 "claimed": false, 00:03:42.567 "zoned": false, 00:03:42.567 "supported_io_types": { 00:03:42.567 "read": true, 00:03:42.567 "write": true, 00:03:42.567 "unmap": true, 00:03:42.567 "flush": true, 00:03:42.567 "reset": true, 00:03:42.567 "nvme_admin": false, 00:03:42.567 "nvme_io": false, 00:03:42.567 "nvme_io_md": false, 00:03:42.567 "write_zeroes": true, 00:03:42.567 "zcopy": true, 00:03:42.567 "get_zone_info": false, 00:03:42.567 "zone_management": false, 00:03:42.567 "zone_append": false, 00:03:42.567 "compare": false, 00:03:42.567 "compare_and_write": false, 00:03:42.567 "abort": true, 00:03:42.567 "seek_hole": false, 00:03:42.567 "seek_data": false, 00:03:42.567 "copy": true, 00:03:42.567 "nvme_iov_md": false 00:03:42.567 }, 00:03:42.567 "memory_domains": [ 00:03:42.567 { 00:03:42.567 "dma_device_id": "system", 00:03:42.567 "dma_device_type": 1 00:03:42.567 }, 00:03:42.567 { 00:03:42.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:42.567 "dma_device_type": 2 00:03:42.567 } 00:03:42.567 ], 00:03:42.567 "driver_specific": { 00:03:42.567 "passthru": { 00:03:42.567 "name": "Passthru0", 00:03:42.567 "base_bdev_name": "Malloc0" 00:03:42.567 } 00:03:42.567 } 00:03:42.567 } 00:03:42.567 ]' 00:03:42.567 20:33:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:42.567 20:33:46 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:42.567 20:33:46 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:42.568 20:33:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.568 20:33:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.568 20:33:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.568 20:33:46 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:42.568 20:33:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.568 20:33:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.568 20:33:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.568 20:33:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:03:42.568 20:33:46 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.568 20:33:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.568 20:33:46 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.568 20:33:46 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:42.568 20:33:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:42.825 20:33:46 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:42.825 00:03:42.825 real 0m0.213s 00:03:42.825 user 0m0.132s 00:03:42.825 sys 0m0.027s 00:03:42.825 20:33:46 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:42.825 20:33:46 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:42.825 ************************************ 00:03:42.825 END TEST rpc_integrity 00:03:42.825 ************************************ 00:03:42.825 20:33:46 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:42.825 20:33:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:42.825 20:33:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:42.825 20:33:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:42.825 ************************************ 00:03:42.825 START TEST rpc_plugins 00:03:42.825 ************************************ 00:03:42.825 20:33:46 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:42.825 20:33:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:42.825 20:33:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.825 20:33:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:42.825 20:33:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.825 20:33:46 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:42.825 20:33:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:42.825 20:33:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.825 20:33:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:42.825 20:33:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.825 20:33:46 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:42.825 { 00:03:42.825 "name": "Malloc1", 00:03:42.825 "aliases": [ 00:03:42.825 "61ae5799-a21d-456f-861d-20aa95b4e7d0" 00:03:42.825 ], 00:03:42.825 "product_name": "Malloc disk", 00:03:42.825 "block_size": 4096, 00:03:42.825 "num_blocks": 256, 00:03:42.825 "uuid": "61ae5799-a21d-456f-861d-20aa95b4e7d0", 00:03:42.825 "assigned_rate_limits": { 00:03:42.825 "rw_ios_per_sec": 0, 00:03:42.825 "rw_mbytes_per_sec": 0, 00:03:42.825 "r_mbytes_per_sec": 0, 00:03:42.825 "w_mbytes_per_sec": 0 00:03:42.825 }, 00:03:42.825 "claimed": false, 00:03:42.825 "zoned": false, 00:03:42.825 "supported_io_types": { 00:03:42.825 "read": true, 00:03:42.825 "write": true, 00:03:42.825 "unmap": true, 00:03:42.825 "flush": true, 00:03:42.825 "reset": true, 00:03:42.825 "nvme_admin": false, 00:03:42.825 "nvme_io": false, 00:03:42.825 "nvme_io_md": false, 00:03:42.825 "write_zeroes": true, 00:03:42.825 "zcopy": true, 00:03:42.825 "get_zone_info": false, 00:03:42.825 "zone_management": false, 00:03:42.825 "zone_append": false, 00:03:42.825 "compare": false, 00:03:42.825 "compare_and_write": false, 00:03:42.825 "abort": true, 00:03:42.825 "seek_hole": false, 00:03:42.825 "seek_data": false, 00:03:42.825 "copy": true, 00:03:42.825 "nvme_iov_md": false 
00:03:42.825 }, 00:03:42.825 "memory_domains": [ 00:03:42.825 { 00:03:42.825 "dma_device_id": "system", 00:03:42.825 "dma_device_type": 1 00:03:42.825 }, 00:03:42.825 { 00:03:42.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:42.825 "dma_device_type": 2 00:03:42.825 } 00:03:42.825 ], 00:03:42.825 "driver_specific": {} 00:03:42.825 } 00:03:42.825 ]' 00:03:42.825 20:33:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:42.825 20:33:46 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:42.825 20:33:46 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:42.825 20:33:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.825 20:33:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:42.825 20:33:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.825 20:33:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:42.825 20:33:46 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.825 20:33:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:42.825 20:33:46 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.825 20:33:46 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:42.825 20:33:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:42.825 20:33:46 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:42.825 00:03:42.825 real 0m0.106s 00:03:42.825 user 0m0.069s 00:03:42.825 sys 0m0.007s 00:03:42.825 20:33:46 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:42.825 20:33:46 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:42.825 ************************************ 00:03:42.825 END TEST rpc_plugins 00:03:42.825 ************************************ 00:03:42.825 20:33:46 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:42.825 20:33:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:42.825 20:33:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:42.825 20:33:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:42.825 ************************************ 00:03:42.825 START TEST rpc_trace_cmd_test 00:03:42.825 ************************************ 00:03:42.826 20:33:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:42.826 20:33:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:42.826 20:33:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:42.826 20:33:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.826 20:33:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:42.826 20:33:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.826 20:33:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:42.826 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1531252", 00:03:42.826 "tpoint_group_mask": "0x8", 00:03:42.826 "iscsi_conn": { 00:03:42.826 "mask": "0x2", 00:03:42.826 "tpoint_mask": "0x0" 00:03:42.826 }, 00:03:42.826 "scsi": { 00:03:42.826 "mask": "0x4", 00:03:42.826 "tpoint_mask": "0x0" 00:03:42.826 }, 00:03:42.826 "bdev": { 00:03:42.826 "mask": "0x8", 00:03:42.826 "tpoint_mask": "0xffffffffffffffff" 00:03:42.826 }, 00:03:42.826 "nvmf_rdma": { 00:03:42.826 "mask": "0x10", 00:03:42.826 "tpoint_mask": "0x0" 00:03:42.826 }, 00:03:42.826 "nvmf_tcp": { 00:03:42.826 "mask": "0x20", 00:03:42.826 
"tpoint_mask": "0x0" 00:03:42.826 }, 00:03:42.826 "ftl": { 00:03:42.826 "mask": "0x40", 00:03:42.826 "tpoint_mask": "0x0" 00:03:42.826 }, 00:03:42.826 "blobfs": { 00:03:42.826 "mask": "0x80", 00:03:42.826 "tpoint_mask": "0x0" 00:03:42.826 }, 00:03:42.826 "dsa": { 00:03:42.826 "mask": "0x200", 00:03:42.826 "tpoint_mask": "0x0" 00:03:42.826 }, 00:03:42.826 "thread": { 00:03:42.826 "mask": "0x400", 00:03:42.826 "tpoint_mask": "0x0" 00:03:42.826 }, 00:03:42.826 "nvme_pcie": { 00:03:42.826 "mask": "0x800", 00:03:42.826 "tpoint_mask": "0x0" 00:03:42.826 }, 00:03:42.826 "iaa": { 00:03:42.826 "mask": "0x1000", 00:03:42.826 "tpoint_mask": "0x0" 00:03:42.826 }, 00:03:42.826 "nvme_tcp": { 00:03:42.826 "mask": "0x2000", 00:03:42.826 "tpoint_mask": "0x0" 00:03:42.826 }, 00:03:42.826 "bdev_nvme": { 00:03:42.826 "mask": "0x4000", 00:03:42.826 "tpoint_mask": "0x0" 00:03:42.826 }, 00:03:42.826 "sock": { 00:03:42.826 "mask": "0x8000", 00:03:42.826 "tpoint_mask": "0x0" 00:03:42.826 }, 00:03:42.826 "blob": { 00:03:42.826 "mask": "0x10000", 00:03:42.826 "tpoint_mask": "0x0" 00:03:42.826 }, 00:03:42.826 "bdev_raid": { 00:03:42.826 "mask": "0x20000", 00:03:42.826 "tpoint_mask": "0x0" 00:03:42.826 }, 00:03:42.826 "scheduler": { 00:03:42.826 "mask": "0x40000", 00:03:42.826 "tpoint_mask": "0x0" 00:03:42.826 } 00:03:42.826 }' 00:03:42.826 20:33:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:43.083 20:33:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:43.083 20:33:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:43.083 20:33:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:43.083 20:33:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:43.083 20:33:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:43.083 20:33:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:43.083 20:33:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:43.083 20:33:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:43.083 20:33:46 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:43.083 00:03:43.083 real 0m0.199s 00:03:43.083 user 0m0.174s 00:03:43.083 sys 0m0.016s 00:03:43.083 20:33:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:43.083 20:33:46 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:43.083 ************************************ 00:03:43.083 END TEST rpc_trace_cmd_test 00:03:43.083 ************************************ 00:03:43.083 20:33:46 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:43.083 20:33:46 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:43.083 20:33:46 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:43.083 20:33:46 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:43.083 20:33:46 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:43.083 20:33:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:43.083 ************************************ 00:03:43.083 START TEST rpc_daemon_integrity 00:03:43.083 ************************************ 00:03:43.083 20:33:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:43.083 20:33:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:43.083 20:33:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:43.083 20:33:46 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.083 20:33:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:43.083 20:33:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:43.083 20:33:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:43.083 20:33:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:43.083 20:33:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:43.083 20:33:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:43.083 20:33:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.341 20:33:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:43.341 20:33:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:43.341 20:33:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:43.341 20:33:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:43.341 20:33:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.341 20:33:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:43.341 20:33:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:43.341 { 00:03:43.341 "name": "Malloc2", 00:03:43.341 "aliases": [ 00:03:43.341 "881fc23e-2697-44ef-8034-fcea91d8c835" 00:03:43.341 ], 00:03:43.341 "product_name": "Malloc disk", 00:03:43.341 "block_size": 512, 00:03:43.341 "num_blocks": 16384, 00:03:43.341 "uuid": "881fc23e-2697-44ef-8034-fcea91d8c835", 00:03:43.341 "assigned_rate_limits": { 00:03:43.341 "rw_ios_per_sec": 0, 00:03:43.341 "rw_mbytes_per_sec": 0, 00:03:43.341 "r_mbytes_per_sec": 0, 00:03:43.341 "w_mbytes_per_sec": 0 00:03:43.341 }, 00:03:43.341 "claimed": false, 00:03:43.341 "zoned": false, 00:03:43.341 "supported_io_types": { 00:03:43.341 "read": true, 00:03:43.341 "write": true, 00:03:43.341 "unmap": true, 00:03:43.341 "flush": true, 00:03:43.341 "reset": true, 00:03:43.341 "nvme_admin": false, 00:03:43.341 "nvme_io": false, 00:03:43.341 "nvme_io_md": false, 00:03:43.341 "write_zeroes": true, 00:03:43.341 "zcopy": true, 00:03:43.341 "get_zone_info": false, 00:03:43.341 "zone_management": false, 00:03:43.341 "zone_append": false, 00:03:43.341 "compare": false, 00:03:43.341 "compare_and_write": false, 00:03:43.341 "abort": true, 00:03:43.341 "seek_hole": false, 00:03:43.341 "seek_data": false, 00:03:43.341 "copy": true, 00:03:43.341 "nvme_iov_md": false 00:03:43.341 }, 00:03:43.341 "memory_domains": [ 00:03:43.341 { 00:03:43.341 "dma_device_id": "system", 00:03:43.341 "dma_device_type": 1 00:03:43.341 }, 00:03:43.341 { 00:03:43.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:43.341 "dma_device_type": 2 00:03:43.341 } 00:03:43.341 ], 00:03:43.341 "driver_specific": {} 00:03:43.341 } 00:03:43.341 ]' 00:03:43.341 20:33:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:43.341 20:33:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:43.341 20:33:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:43.341 20:33:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:43.341 20:33:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.341 [2024-11-26 20:33:46.841984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:43.341 
[2024-11-26 20:33:46.842035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:43.341 [2024-11-26 20:33:46.842064] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1afefc0 00:03:43.341 [2024-11-26 20:33:46.842077] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:43.341 [2024-11-26 20:33:46.843243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:43.341 [2024-11-26 20:33:46.843271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:43.341 Passthru0 00:03:43.341 20:33:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:43.341 20:33:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:43.341 20:33:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:43.341 20:33:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.341 20:33:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:43.341 20:33:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:43.341 { 00:03:43.341 "name": "Malloc2", 00:03:43.341 "aliases": [ 00:03:43.341 "881fc23e-2697-44ef-8034-fcea91d8c835" 00:03:43.341 ], 00:03:43.341 "product_name": "Malloc disk", 00:03:43.341 "block_size": 512, 00:03:43.341 "num_blocks": 16384, 00:03:43.341 "uuid": "881fc23e-2697-44ef-8034-fcea91d8c835", 00:03:43.341 "assigned_rate_limits": { 00:03:43.341 "rw_ios_per_sec": 0, 00:03:43.341 "rw_mbytes_per_sec": 0, 00:03:43.341 "r_mbytes_per_sec": 0, 00:03:43.342 "w_mbytes_per_sec": 0 00:03:43.342 }, 00:03:43.342 "claimed": true, 00:03:43.342 "claim_type": "exclusive_write", 00:03:43.342 "zoned": false, 00:03:43.342 "supported_io_types": { 00:03:43.342 "read": true, 00:03:43.342 "write": true, 00:03:43.342 "unmap": true, 00:03:43.342 "flush": true, 00:03:43.342 "reset": true, 00:03:43.342 "nvme_admin": false, 00:03:43.342 "nvme_io": false, 00:03:43.342 "nvme_io_md": false, 00:03:43.342 "write_zeroes": true, 00:03:43.342 "zcopy": true, 00:03:43.342 "get_zone_info": false, 00:03:43.342 "zone_management": false, 00:03:43.342 "zone_append": false, 00:03:43.342 "compare": false, 00:03:43.342 "compare_and_write": false, 00:03:43.342 "abort": true, 00:03:43.342 "seek_hole": false, 00:03:43.342 "seek_data": false, 00:03:43.342 "copy": true, 00:03:43.342 "nvme_iov_md": false 00:03:43.342 }, 00:03:43.342 "memory_domains": [ 00:03:43.342 { 00:03:43.342 "dma_device_id": "system", 00:03:43.342 "dma_device_type": 1 00:03:43.342 }, 00:03:43.342 { 00:03:43.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:43.342 "dma_device_type": 2 00:03:43.342 } 00:03:43.342 ], 00:03:43.342 "driver_specific": {} 00:03:43.342 }, 00:03:43.342 { 00:03:43.342 "name": "Passthru0", 00:03:43.342 "aliases": [ 00:03:43.342 "1abb8ccc-553b-57a7-afec-2371202d6a67" 00:03:43.342 ], 00:03:43.342 "product_name": "passthru", 00:03:43.342 "block_size": 512, 00:03:43.342 "num_blocks": 16384, 00:03:43.342 "uuid": "1abb8ccc-553b-57a7-afec-2371202d6a67", 00:03:43.342 "assigned_rate_limits": { 00:03:43.342 "rw_ios_per_sec": 0, 00:03:43.342 "rw_mbytes_per_sec": 0, 00:03:43.342 "r_mbytes_per_sec": 0, 00:03:43.342 "w_mbytes_per_sec": 0 00:03:43.342 }, 00:03:43.342 "claimed": false, 00:03:43.342 "zoned": false, 00:03:43.342 "supported_io_types": { 00:03:43.342 "read": true, 00:03:43.342 "write": true, 00:03:43.342 "unmap": true, 00:03:43.342 "flush": true, 00:03:43.342 "reset": true, 
00:03:43.342 "nvme_admin": false, 00:03:43.342 "nvme_io": false, 00:03:43.342 "nvme_io_md": false, 00:03:43.342 "write_zeroes": true, 00:03:43.342 "zcopy": true, 00:03:43.342 "get_zone_info": false, 00:03:43.342 "zone_management": false, 00:03:43.342 "zone_append": false, 00:03:43.342 "compare": false, 00:03:43.342 "compare_and_write": false, 00:03:43.342 "abort": true, 00:03:43.342 "seek_hole": false, 00:03:43.342 "seek_data": false, 00:03:43.342 "copy": true, 00:03:43.342 "nvme_iov_md": false 00:03:43.342 }, 00:03:43.342 "memory_domains": [ 00:03:43.342 { 00:03:43.342 "dma_device_id": "system", 00:03:43.342 "dma_device_type": 1 00:03:43.342 }, 00:03:43.342 { 00:03:43.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:43.342 "dma_device_type": 2 00:03:43.342 } 00:03:43.342 ], 00:03:43.342 "driver_specific": { 00:03:43.342 "passthru": { 00:03:43.342 "name": "Passthru0", 00:03:43.342 "base_bdev_name": "Malloc2" 00:03:43.342 } 00:03:43.342 } 00:03:43.342 } 00:03:43.342 ]' 00:03:43.342 20:33:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:43.342 20:33:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:43.342 20:33:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:43.342 20:33:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:43.342 20:33:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.342 20:33:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:43.342 20:33:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:43.342 20:33:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:43.342 20:33:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.342 20:33:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:43.342 20:33:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:43.342 20:33:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:43.342 20:33:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.342 20:33:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:43.342 20:33:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:43.342 20:33:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:43.342 20:33:46 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:43.342 00:03:43.342 real 0m0.221s 00:03:43.342 user 0m0.143s 00:03:43.342 sys 0m0.018s 00:03:43.342 20:33:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:43.342 20:33:46 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:43.342 ************************************ 00:03:43.342 END TEST rpc_daemon_integrity 00:03:43.342 ************************************ 00:03:43.342 20:33:46 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:43.342 20:33:46 rpc -- rpc/rpc.sh@84 -- # killprocess 1531252 00:03:43.342 20:33:46 rpc -- common/autotest_common.sh@954 -- # '[' -z 1531252 ']' 00:03:43.342 20:33:46 rpc -- common/autotest_common.sh@958 -- # kill -0 1531252 00:03:43.342 20:33:46 rpc -- common/autotest_common.sh@959 -- # uname 00:03:43.342 20:33:46 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:43.342 20:33:46 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1531252 
00:03:43.342 20:33:47 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:43.342 20:33:47 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:43.342 20:33:47 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1531252' 00:03:43.342 killing process with pid 1531252 00:03:43.342 20:33:47 rpc -- common/autotest_common.sh@973 -- # kill 1531252 00:03:43.342 20:33:47 rpc -- common/autotest_common.sh@978 -- # wait 1531252 00:03:43.906 00:03:43.906 real 0m1.976s 00:03:43.906 user 0m2.449s 00:03:43.906 sys 0m0.605s 00:03:43.906 20:33:47 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:43.906 20:33:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:43.906 ************************************ 00:03:43.906 END TEST rpc 00:03:43.906 ************************************ 00:03:43.906 20:33:47 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:43.906 20:33:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:43.906 20:33:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:43.906 20:33:47 -- common/autotest_common.sh@10 -- # set +x 00:03:43.906 ************************************ 00:03:43.906 START TEST skip_rpc 00:03:43.906 ************************************ 00:03:43.906 20:33:47 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:43.906 * Looking for test storage... 00:03:43.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:43.906 20:33:47 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:43.906 20:33:47 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:43.906 20:33:47 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:44.165 20:33:47 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:44.165 20:33:47 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:44.165 20:33:47 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:44.165 20:33:47 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:44.165 20:33:47 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:44.165 20:33:47 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:44.165 20:33:47 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:44.165 20:33:47 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:44.165 20:33:47 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:44.165 20:33:47 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:44.165 20:33:47 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:44.165 20:33:47 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:44.165 20:33:47 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:44.165 20:33:47 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:44.165 20:33:47 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:44.165 20:33:47 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:44.165 20:33:47 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:44.165 20:33:47 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:44.165 20:33:47 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:44.165 20:33:47 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:44.165 20:33:47 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:44.165 20:33:47 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:44.165 20:33:47 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:44.165 20:33:47 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:44.165 20:33:47 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:44.165 20:33:47 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:44.165 20:33:47 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:44.165 20:33:47 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:44.165 20:33:47 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:44.165 20:33:47 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:44.165 20:33:47 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:44.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.165 --rc genhtml_branch_coverage=1 00:03:44.165 --rc genhtml_function_coverage=1 00:03:44.165 --rc genhtml_legend=1 00:03:44.165 --rc geninfo_all_blocks=1 00:03:44.165 --rc geninfo_unexecuted_blocks=1 00:03:44.165 00:03:44.165 ' 00:03:44.165 20:33:47 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:44.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.165 --rc genhtml_branch_coverage=1 00:03:44.165 --rc genhtml_function_coverage=1 00:03:44.165 --rc genhtml_legend=1 00:03:44.165 --rc geninfo_all_blocks=1 00:03:44.165 --rc geninfo_unexecuted_blocks=1 00:03:44.165 00:03:44.165 ' 00:03:44.165 20:33:47 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:44.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.165 --rc genhtml_branch_coverage=1 00:03:44.165 --rc genhtml_function_coverage=1 00:03:44.165 --rc genhtml_legend=1 00:03:44.165 --rc geninfo_all_blocks=1 00:03:44.165 --rc geninfo_unexecuted_blocks=1 00:03:44.165 00:03:44.165 ' 00:03:44.165 20:33:47 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:44.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.165 --rc genhtml_branch_coverage=1 00:03:44.165 --rc genhtml_function_coverage=1 00:03:44.165 --rc genhtml_legend=1 00:03:44.165 --rc geninfo_all_blocks=1 00:03:44.165 --rc geninfo_unexecuted_blocks=1 00:03:44.165 00:03:44.165 ' 00:03:44.165 20:33:47 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:44.165 20:33:47 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:44.165 20:33:47 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:44.165 20:33:47 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:44.165 20:33:47 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:44.165 20:33:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:44.165 ************************************ 00:03:44.165 START TEST skip_rpc 00:03:44.165 ************************************ 00:03:44.165 20:33:47 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:44.165 
20:33:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1531585 00:03:44.165 20:33:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:44.165 20:33:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:44.165 20:33:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:44.165 [2024-11-26 20:33:47.709356] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:03:44.165 [2024-11-26 20:33:47.709435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1531585 ] 00:03:44.165 [2024-11-26 20:33:47.776787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:44.165 [2024-11-26 20:33:47.839100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:49.424 20:33:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:49.424 20:33:52 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:49.424 20:33:52 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:49.424 20:33:52 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:49.424 20:33:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:49.424 20:33:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:49.424 20:33:52 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:49.424 20:33:52 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:49.424 20:33:52 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.424 20:33:52 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.424 20:33:52 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:49.424 20:33:52 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:49.424 20:33:52 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:49.424 20:33:52 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:49.424 20:33:52 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:49.424 20:33:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:49.424 20:33:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1531585 00:03:49.424 20:33:52 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1531585 ']' 00:03:49.424 20:33:52 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1531585 00:03:49.424 20:33:52 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:49.424 20:33:52 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:49.424 20:33:52 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1531585 00:03:49.424 20:33:52 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:49.424 20:33:52 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:49.424 20:33:52 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1531585' 00:03:49.425 killing process with pid 1531585 00:03:49.425 20:33:52 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1531585 00:03:49.425 20:33:52 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1531585 00:03:49.425 00:03:49.425 real 0m5.459s 00:03:49.425 user 0m5.143s 00:03:49.425 sys 0m0.326s 00:03:49.425 20:33:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.425 20:33:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.425 ************************************ 00:03:49.425 END TEST skip_rpc 00:03:49.425 ************************************ 00:03:49.683 20:33:53 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:49.683 20:33:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:49.683 20:33:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.683 20:33:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.683 ************************************ 00:03:49.683 START TEST skip_rpc_with_json 00:03:49.683 ************************************ 00:03:49.683 20:33:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:49.683 20:33:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:49.683 20:33:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1532278 00:03:49.683 20:33:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:49.683 20:33:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:49.683 20:33:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1532278 00:03:49.683 20:33:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1532278 ']' 00:03:49.683 20:33:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:49.683 20:33:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:49.683 20:33:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:49.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:49.683 20:33:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:49.683 20:33:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:49.683 [2024-11-26 20:33:53.214649] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:03:49.683 [2024-11-26 20:33:53.214747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1532278 ] 00:03:49.683 [2024-11-26 20:33:53.282030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:49.683 [2024-11-26 20:33:53.338843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:49.941 20:33:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:49.941 20:33:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:49.941 20:33:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:49.941 20:33:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.941 20:33:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:49.941 [2024-11-26 20:33:53.611990] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:49.941 request: 00:03:49.941 { 00:03:49.941 "trtype": "tcp", 00:03:49.941 "method": "nvmf_get_transports", 00:03:49.941 "req_id": 1 00:03:49.941 } 00:03:49.941 Got JSON-RPC error response 00:03:49.941 response: 00:03:49.941 { 00:03:49.941 "code": -19, 00:03:49.941 "message": "No such device" 00:03:49.941 } 00:03:49.941 20:33:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:49.941 20:33:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:49.941 20:33:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.941 20:33:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:49.941 [2024-11-26 20:33:53.620094] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:49.941 20:33:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:49.941 20:33:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:49.941 20:33:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:49.941 20:33:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:50.199 20:33:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.199 20:33:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:50.199 { 00:03:50.199 "subsystems": [ 00:03:50.199 { 00:03:50.199 "subsystem": "fsdev", 00:03:50.199 "config": [ 00:03:50.199 { 00:03:50.199 "method": "fsdev_set_opts", 00:03:50.199 "params": { 00:03:50.199 "fsdev_io_pool_size": 65535, 00:03:50.199 "fsdev_io_cache_size": 256 00:03:50.199 } 00:03:50.199 } 00:03:50.199 ] 00:03:50.199 }, 00:03:50.199 { 00:03:50.199 "subsystem": "vfio_user_target", 00:03:50.199 "config": null 00:03:50.199 }, 00:03:50.199 { 00:03:50.199 "subsystem": "keyring", 00:03:50.199 "config": [] 00:03:50.199 }, 00:03:50.199 { 00:03:50.199 "subsystem": "iobuf", 00:03:50.199 "config": [ 00:03:50.199 { 00:03:50.199 "method": "iobuf_set_options", 00:03:50.199 "params": { 00:03:50.199 "small_pool_count": 8192, 00:03:50.199 "large_pool_count": 1024, 00:03:50.199 "small_bufsize": 8192, 00:03:50.199 "large_bufsize": 135168, 00:03:50.199 "enable_numa": false 00:03:50.199 } 00:03:50.199 } 
00:03:50.199 ] 00:03:50.199 }, 00:03:50.199 { 00:03:50.199 "subsystem": "sock", 00:03:50.199 "config": [ 00:03:50.199 { 00:03:50.199 "method": "sock_set_default_impl", 00:03:50.199 "params": { 00:03:50.199 "impl_name": "posix" 00:03:50.199 } 00:03:50.199 }, 00:03:50.199 { 00:03:50.199 "method": "sock_impl_set_options", 00:03:50.199 "params": { 00:03:50.199 "impl_name": "ssl", 00:03:50.199 "recv_buf_size": 4096, 00:03:50.199 "send_buf_size": 4096, 00:03:50.199 "enable_recv_pipe": true, 00:03:50.199 "enable_quickack": false, 00:03:50.199 "enable_placement_id": 0, 00:03:50.199 "enable_zerocopy_send_server": true, 00:03:50.199 "enable_zerocopy_send_client": false, 00:03:50.199 "zerocopy_threshold": 0, 00:03:50.199 "tls_version": 0, 00:03:50.199 "enable_ktls": false 00:03:50.199 } 00:03:50.199 }, 00:03:50.199 { 00:03:50.199 "method": "sock_impl_set_options", 00:03:50.199 "params": { 00:03:50.199 "impl_name": "posix", 00:03:50.199 "recv_buf_size": 2097152, 00:03:50.199 "send_buf_size": 2097152, 00:03:50.199 "enable_recv_pipe": true, 00:03:50.199 "enable_quickack": false, 00:03:50.199 "enable_placement_id": 0, 00:03:50.199 "enable_zerocopy_send_server": true, 00:03:50.199 "enable_zerocopy_send_client": false, 00:03:50.199 "zerocopy_threshold": 0, 00:03:50.199 "tls_version": 0, 00:03:50.199 "enable_ktls": false 00:03:50.199 } 00:03:50.199 } 00:03:50.199 ] 00:03:50.199 }, 00:03:50.199 { 00:03:50.199 "subsystem": "vmd", 00:03:50.199 "config": [] 00:03:50.199 }, 00:03:50.199 { 00:03:50.199 "subsystem": "accel", 00:03:50.199 "config": [ 00:03:50.199 { 00:03:50.199 "method": "accel_set_options", 00:03:50.199 "params": { 00:03:50.199 "small_cache_size": 128, 00:03:50.199 "large_cache_size": 16, 00:03:50.199 "task_count": 2048, 00:03:50.199 "sequence_count": 2048, 00:03:50.199 "buf_count": 2048 00:03:50.199 } 00:03:50.199 } 00:03:50.199 ] 00:03:50.199 }, 00:03:50.199 { 00:03:50.199 "subsystem": "bdev", 00:03:50.199 "config": [ 00:03:50.199 { 00:03:50.199 "method": "bdev_set_options", 00:03:50.199 "params": { 00:03:50.199 "bdev_io_pool_size": 65535, 00:03:50.199 "bdev_io_cache_size": 256, 00:03:50.199 "bdev_auto_examine": true, 00:03:50.199 "iobuf_small_cache_size": 128, 00:03:50.199 "iobuf_large_cache_size": 16 00:03:50.199 } 00:03:50.199 }, 00:03:50.199 { 00:03:50.199 "method": "bdev_raid_set_options", 00:03:50.199 "params": { 00:03:50.199 "process_window_size_kb": 1024, 00:03:50.199 "process_max_bandwidth_mb_sec": 0 00:03:50.199 } 00:03:50.199 }, 00:03:50.199 { 00:03:50.199 "method": "bdev_iscsi_set_options", 00:03:50.199 "params": { 00:03:50.199 "timeout_sec": 30 00:03:50.199 } 00:03:50.199 }, 00:03:50.199 { 00:03:50.199 "method": "bdev_nvme_set_options", 00:03:50.199 "params": { 00:03:50.199 "action_on_timeout": "none", 00:03:50.199 "timeout_us": 0, 00:03:50.199 "timeout_admin_us": 0, 00:03:50.199 "keep_alive_timeout_ms": 10000, 00:03:50.199 "arbitration_burst": 0, 00:03:50.199 "low_priority_weight": 0, 00:03:50.199 "medium_priority_weight": 0, 00:03:50.199 "high_priority_weight": 0, 00:03:50.199 "nvme_adminq_poll_period_us": 10000, 00:03:50.199 "nvme_ioq_poll_period_us": 0, 00:03:50.199 "io_queue_requests": 0, 00:03:50.199 "delay_cmd_submit": true, 00:03:50.199 "transport_retry_count": 4, 00:03:50.199 "bdev_retry_count": 3, 00:03:50.199 "transport_ack_timeout": 0, 00:03:50.199 "ctrlr_loss_timeout_sec": 0, 00:03:50.199 "reconnect_delay_sec": 0, 00:03:50.199 "fast_io_fail_timeout_sec": 0, 00:03:50.199 "disable_auto_failback": false, 00:03:50.199 "generate_uuids": false, 00:03:50.199 "transport_tos": 
0, 00:03:50.199 "nvme_error_stat": false, 00:03:50.199 "rdma_srq_size": 0, 00:03:50.199 "io_path_stat": false, 00:03:50.199 "allow_accel_sequence": false, 00:03:50.199 "rdma_max_cq_size": 0, 00:03:50.199 "rdma_cm_event_timeout_ms": 0, 00:03:50.199 "dhchap_digests": [ 00:03:50.199 "sha256", 00:03:50.200 "sha384", 00:03:50.200 "sha512" 00:03:50.200 ], 00:03:50.200 "dhchap_dhgroups": [ 00:03:50.200 "null", 00:03:50.200 "ffdhe2048", 00:03:50.200 "ffdhe3072", 00:03:50.200 "ffdhe4096", 00:03:50.200 "ffdhe6144", 00:03:50.200 "ffdhe8192" 00:03:50.200 ] 00:03:50.200 } 00:03:50.200 }, 00:03:50.200 { 00:03:50.200 "method": "bdev_nvme_set_hotplug", 00:03:50.200 "params": { 00:03:50.200 "period_us": 100000, 00:03:50.200 "enable": false 00:03:50.200 } 00:03:50.200 }, 00:03:50.200 { 00:03:50.200 "method": "bdev_wait_for_examine" 00:03:50.200 } 00:03:50.200 ] 00:03:50.200 }, 00:03:50.200 { 00:03:50.200 "subsystem": "scsi", 00:03:50.200 "config": null 00:03:50.200 }, 00:03:50.200 { 00:03:50.200 "subsystem": "scheduler", 00:03:50.200 "config": [ 00:03:50.200 { 00:03:50.200 "method": "framework_set_scheduler", 00:03:50.200 "params": { 00:03:50.200 "name": "static" 00:03:50.200 } 00:03:50.200 } 00:03:50.200 ] 00:03:50.200 }, 00:03:50.200 { 00:03:50.200 "subsystem": "vhost_scsi", 00:03:50.200 "config": [] 00:03:50.200 }, 00:03:50.200 { 00:03:50.200 "subsystem": "vhost_blk", 00:03:50.200 "config": [] 00:03:50.200 }, 00:03:50.200 { 00:03:50.200 "subsystem": "ublk", 00:03:50.200 "config": [] 00:03:50.200 }, 00:03:50.200 { 00:03:50.200 "subsystem": "nbd", 00:03:50.200 "config": [] 00:03:50.200 }, 00:03:50.200 { 00:03:50.200 "subsystem": "nvmf", 00:03:50.200 "config": [ 00:03:50.200 { 00:03:50.200 "method": "nvmf_set_config", 00:03:50.200 "params": { 00:03:50.200 "discovery_filter": "match_any", 00:03:50.200 "admin_cmd_passthru": { 00:03:50.200 "identify_ctrlr": false 00:03:50.200 }, 00:03:50.200 "dhchap_digests": [ 00:03:50.200 "sha256", 00:03:50.200 "sha384", 00:03:50.200 "sha512" 00:03:50.200 ], 00:03:50.200 "dhchap_dhgroups": [ 00:03:50.200 "null", 00:03:50.200 "ffdhe2048", 00:03:50.200 "ffdhe3072", 00:03:50.200 "ffdhe4096", 00:03:50.200 "ffdhe6144", 00:03:50.200 "ffdhe8192" 00:03:50.200 ] 00:03:50.200 } 00:03:50.200 }, 00:03:50.200 { 00:03:50.200 "method": "nvmf_set_max_subsystems", 00:03:50.200 "params": { 00:03:50.200 "max_subsystems": 1024 00:03:50.200 } 00:03:50.200 }, 00:03:50.200 { 00:03:50.200 "method": "nvmf_set_crdt", 00:03:50.200 "params": { 00:03:50.200 "crdt1": 0, 00:03:50.200 "crdt2": 0, 00:03:50.200 "crdt3": 0 00:03:50.200 } 00:03:50.200 }, 00:03:50.200 { 00:03:50.200 "method": "nvmf_create_transport", 00:03:50.200 "params": { 00:03:50.200 "trtype": "TCP", 00:03:50.200 "max_queue_depth": 128, 00:03:50.200 "max_io_qpairs_per_ctrlr": 127, 00:03:50.200 "in_capsule_data_size": 4096, 00:03:50.200 "max_io_size": 131072, 00:03:50.200 "io_unit_size": 131072, 00:03:50.200 "max_aq_depth": 128, 00:03:50.200 "num_shared_buffers": 511, 00:03:50.200 "buf_cache_size": 4294967295, 00:03:50.200 "dif_insert_or_strip": false, 00:03:50.200 "zcopy": false, 00:03:50.200 "c2h_success": true, 00:03:50.200 "sock_priority": 0, 00:03:50.200 "abort_timeout_sec": 1, 00:03:50.200 "ack_timeout": 0, 00:03:50.200 "data_wr_pool_size": 0 00:03:50.200 } 00:03:50.200 } 00:03:50.200 ] 00:03:50.200 }, 00:03:50.200 { 00:03:50.200 "subsystem": "iscsi", 00:03:50.200 "config": [ 00:03:50.200 { 00:03:50.200 "method": "iscsi_set_options", 00:03:50.200 "params": { 00:03:50.200 "node_base": "iqn.2016-06.io.spdk", 00:03:50.200 "max_sessions": 
128, 00:03:50.200 "max_connections_per_session": 2, 00:03:50.200 "max_queue_depth": 64, 00:03:50.200 "default_time2wait": 2, 00:03:50.200 "default_time2retain": 20, 00:03:50.200 "first_burst_length": 8192, 00:03:50.200 "immediate_data": true, 00:03:50.200 "allow_duplicated_isid": false, 00:03:50.200 "error_recovery_level": 0, 00:03:50.200 "nop_timeout": 60, 00:03:50.200 "nop_in_interval": 30, 00:03:50.200 "disable_chap": false, 00:03:50.200 "require_chap": false, 00:03:50.200 "mutual_chap": false, 00:03:50.200 "chap_group": 0, 00:03:50.200 "max_large_datain_per_connection": 64, 00:03:50.200 "max_r2t_per_connection": 4, 00:03:50.200 "pdu_pool_size": 36864, 00:03:50.200 "immediate_data_pool_size": 16384, 00:03:50.200 "data_out_pool_size": 2048 00:03:50.200 } 00:03:50.200 } 00:03:50.200 ] 00:03:50.200 } 00:03:50.200 ] 00:03:50.200 } 00:03:50.200 20:33:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:50.200 20:33:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1532278 00:03:50.200 20:33:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1532278 ']' 00:03:50.200 20:33:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1532278 00:03:50.200 20:33:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:50.200 20:33:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:50.200 20:33:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1532278 00:03:50.200 20:33:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:50.200 20:33:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:50.200 20:33:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1532278' 00:03:50.200 killing process with pid 1532278 00:03:50.200 20:33:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1532278 00:03:50.200 20:33:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1532278 00:03:50.765 20:33:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1532417 00:03:50.765 20:33:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:50.765 20:33:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:56.021 20:33:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1532417 00:03:56.021 20:33:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1532417 ']' 00:03:56.021 20:33:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1532417 00:03:56.021 20:33:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:56.021 20:33:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:56.021 20:33:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1532417 00:03:56.021 20:33:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:56.021 20:33:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:56.021 20:33:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 1532417' 00:03:56.021 killing process with pid 1532417 00:03:56.021 20:33:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1532417 00:03:56.021 20:33:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1532417 00:03:56.021 20:33:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:56.021 20:33:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:56.279 00:03:56.279 real 0m6.556s 00:03:56.279 user 0m6.176s 00:03:56.279 sys 0m0.704s 00:03:56.279 20:33:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:56.279 20:33:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:56.279 ************************************ 00:03:56.279 END TEST skip_rpc_with_json 00:03:56.279 ************************************ 00:03:56.279 20:33:59 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:56.279 20:33:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:56.279 20:33:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:56.279 20:33:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.279 ************************************ 00:03:56.279 START TEST skip_rpc_with_delay 00:03:56.279 ************************************ 00:03:56.279 20:33:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:03:56.279 20:33:59 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:56.279 20:33:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:03:56.279 20:33:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:56.279 20:33:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:56.279 20:33:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:56.279 20:33:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:56.279 20:33:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:56.279 20:33:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:56.279 20:33:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:56.279 20:33:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:56.279 20:33:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:56.279 20:33:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:56.279 
[2024-11-26 20:33:59.830600] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:03:56.279 20:33:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:03:56.279 20:33:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:56.279 20:33:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:56.279 20:33:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:56.279 00:03:56.279 real 0m0.076s 00:03:56.279 user 0m0.050s 00:03:56.279 sys 0m0.026s 00:03:56.279 20:33:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:56.279 20:33:59 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:56.279 ************************************ 00:03:56.279 END TEST skip_rpc_with_delay 00:03:56.279 ************************************ 00:03:56.279 20:33:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:56.279 20:33:59 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:56.279 20:33:59 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:56.279 20:33:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:56.279 20:33:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:56.279 20:33:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.279 ************************************ 00:03:56.279 START TEST exit_on_failed_rpc_init 00:03:56.279 ************************************ 00:03:56.279 20:33:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:03:56.279 20:33:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1533127 00:03:56.279 20:33:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:56.279 20:33:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1533127 00:03:56.279 20:33:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1533127 ']' 00:03:56.279 20:33:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:56.279 20:33:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:56.279 20:33:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:56.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:56.279 20:33:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:56.279 20:33:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:56.279 [2024-11-26 20:33:59.955345] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:03:56.279 [2024-11-26 20:33:59.955424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1533127 ] 00:03:56.536 [2024-11-26 20:34:00.028480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.536 [2024-11-26 20:34:00.102073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.794 20:34:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:56.794 20:34:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:03:56.794 20:34:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:56.794 20:34:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:56.794 20:34:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:03:56.794 20:34:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:56.794 20:34:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:56.794 20:34:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:56.794 20:34:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:56.794 20:34:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:56.794 20:34:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:56.794 20:34:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:56.794 20:34:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:56.794 20:34:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:56.794 20:34:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:56.794 [2024-11-26 20:34:00.436190] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:03:56.794 [2024-11-26 20:34:00.436263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1533261 ] 00:03:57.052 [2024-11-26 20:34:00.504905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:57.052 [2024-11-26 20:34:00.567482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:57.052 [2024-11-26 20:34:00.567596] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:03:57.052 [2024-11-26 20:34:00.567616] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:57.052 [2024-11-26 20:34:00.567627] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:57.052 20:34:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:03:57.052 20:34:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:57.052 20:34:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:03:57.052 20:34:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:03:57.052 20:34:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:03:57.052 20:34:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:57.052 20:34:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:57.052 20:34:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1533127 00:03:57.052 20:34:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1533127 ']' 00:03:57.052 20:34:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1533127 00:03:57.052 20:34:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:03:57.052 20:34:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:57.052 20:34:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1533127 00:03:57.052 20:34:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:57.052 20:34:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:57.052 20:34:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1533127' 00:03:57.052 killing process with pid 1533127 00:03:57.052 20:34:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1533127 00:03:57.052 20:34:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1533127 00:03:57.616 00:03:57.616 real 0m1.220s 00:03:57.616 user 0m1.348s 00:03:57.616 sys 0m0.438s 00:03:57.616 20:34:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:57.616 20:34:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:57.616 ************************************ 00:03:57.616 END TEST exit_on_failed_rpc_init 00:03:57.616 ************************************ 00:03:57.616 20:34:01 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:57.616 00:03:57.616 real 0m13.665s 00:03:57.616 user 0m12.888s 00:03:57.616 sys 0m1.698s 00:03:57.616 20:34:01 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:57.616 20:34:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.616 ************************************ 00:03:57.616 END TEST skip_rpc 00:03:57.616 ************************************ 00:03:57.616 20:34:01 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:57.616 20:34:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.616 20:34:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.616 20:34:01 -- 
common/autotest_common.sh@10 -- # set +x 00:03:57.616 ************************************ 00:03:57.616 START TEST rpc_client 00:03:57.616 ************************************ 00:03:57.616 20:34:01 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:57.616 * Looking for test storage... 00:03:57.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:57.616 20:34:01 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:57.616 20:34:01 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:03:57.616 20:34:01 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:57.874 20:34:01 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:57.874 20:34:01 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:57.874 20:34:01 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:57.874 20:34:01 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:57.874 20:34:01 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:03:57.874 20:34:01 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:03:57.874 20:34:01 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:03:57.874 20:34:01 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:03:57.874 20:34:01 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:03:57.874 20:34:01 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:03:57.874 20:34:01 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:03:57.874 20:34:01 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:57.874 20:34:01 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:03:57.874 20:34:01 rpc_client -- scripts/common.sh@345 -- # : 1 00:03:57.874 20:34:01 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:57.874 20:34:01 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:57.874 20:34:01 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:03:57.874 20:34:01 rpc_client -- scripts/common.sh@353 -- # local d=1 00:03:57.874 20:34:01 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:57.874 20:34:01 rpc_client -- scripts/common.sh@355 -- # echo 1 00:03:57.874 20:34:01 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:03:57.874 20:34:01 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:03:57.874 20:34:01 rpc_client -- scripts/common.sh@353 -- # local d=2 00:03:57.874 20:34:01 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:57.874 20:34:01 rpc_client -- scripts/common.sh@355 -- # echo 2 00:03:57.874 20:34:01 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:03:57.874 20:34:01 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:57.874 20:34:01 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:57.874 20:34:01 rpc_client -- scripts/common.sh@368 -- # return 0 00:03:57.874 20:34:01 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:57.874 20:34:01 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:57.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.874 --rc genhtml_branch_coverage=1 00:03:57.874 --rc genhtml_function_coverage=1 00:03:57.874 --rc genhtml_legend=1 00:03:57.874 --rc geninfo_all_blocks=1 00:03:57.874 --rc geninfo_unexecuted_blocks=1 00:03:57.874 00:03:57.874 ' 00:03:57.874 20:34:01 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:57.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.874 --rc genhtml_branch_coverage=1 00:03:57.874 --rc genhtml_function_coverage=1 00:03:57.874 --rc genhtml_legend=1 00:03:57.874 --rc geninfo_all_blocks=1 00:03:57.874 --rc geninfo_unexecuted_blocks=1 00:03:57.874 00:03:57.874 ' 00:03:57.874 20:34:01 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:57.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.874 --rc genhtml_branch_coverage=1 00:03:57.874 --rc genhtml_function_coverage=1 00:03:57.874 --rc genhtml_legend=1 00:03:57.874 --rc geninfo_all_blocks=1 00:03:57.874 --rc geninfo_unexecuted_blocks=1 00:03:57.874 00:03:57.874 ' 00:03:57.874 20:34:01 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:57.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.874 --rc genhtml_branch_coverage=1 00:03:57.875 --rc genhtml_function_coverage=1 00:03:57.875 --rc genhtml_legend=1 00:03:57.875 --rc geninfo_all_blocks=1 00:03:57.875 --rc geninfo_unexecuted_blocks=1 00:03:57.875 00:03:57.875 ' 00:03:57.875 20:34:01 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:57.875 OK 00:03:57.875 20:34:01 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:57.875 00:03:57.875 real 0m0.168s 00:03:57.875 user 0m0.114s 00:03:57.875 sys 0m0.062s 00:03:57.875 20:34:01 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:57.875 20:34:01 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:57.875 ************************************ 00:03:57.875 END TEST rpc_client 00:03:57.875 ************************************ 00:03:57.875 20:34:01 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:03:57.875 20:34:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.875 20:34:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.875 20:34:01 -- common/autotest_common.sh@10 -- # set +x 00:03:57.875 ************************************ 00:03:57.875 START TEST json_config 00:03:57.875 ************************************ 00:03:57.875 20:34:01 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:57.875 20:34:01 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:57.875 20:34:01 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:03:57.875 20:34:01 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:57.875 20:34:01 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:57.875 20:34:01 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:57.875 20:34:01 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:57.875 20:34:01 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:57.875 20:34:01 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:03:57.875 20:34:01 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:03:57.875 20:34:01 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:03:57.875 20:34:01 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:03:57.875 20:34:01 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:03:57.875 20:34:01 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:03:57.875 20:34:01 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:03:57.875 20:34:01 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:57.875 20:34:01 json_config -- scripts/common.sh@344 -- # case "$op" in 00:03:57.875 20:34:01 json_config -- scripts/common.sh@345 -- # : 1 00:03:57.875 20:34:01 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:57.875 20:34:01 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:57.875 20:34:01 json_config -- scripts/common.sh@365 -- # decimal 1 00:03:57.875 20:34:01 json_config -- scripts/common.sh@353 -- # local d=1 00:03:57.875 20:34:01 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:57.875 20:34:01 json_config -- scripts/common.sh@355 -- # echo 1 00:03:57.875 20:34:01 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:03:57.875 20:34:01 json_config -- scripts/common.sh@366 -- # decimal 2 00:03:57.875 20:34:01 json_config -- scripts/common.sh@353 -- # local d=2 00:03:57.875 20:34:01 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:57.875 20:34:01 json_config -- scripts/common.sh@355 -- # echo 2 00:03:57.875 20:34:01 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:03:57.875 20:34:01 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:57.875 20:34:01 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:57.875 20:34:01 json_config -- scripts/common.sh@368 -- # return 0 00:03:57.875 20:34:01 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:57.875 20:34:01 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:57.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.875 --rc genhtml_branch_coverage=1 00:03:57.875 --rc genhtml_function_coverage=1 00:03:57.875 --rc genhtml_legend=1 00:03:57.875 --rc geninfo_all_blocks=1 00:03:57.875 --rc geninfo_unexecuted_blocks=1 00:03:57.875 00:03:57.875 ' 00:03:57.875 20:34:01 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:57.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.875 --rc genhtml_branch_coverage=1 00:03:57.875 --rc genhtml_function_coverage=1 00:03:57.875 --rc genhtml_legend=1 00:03:57.875 --rc geninfo_all_blocks=1 00:03:57.875 --rc geninfo_unexecuted_blocks=1 00:03:57.875 00:03:57.875 ' 00:03:57.875 20:34:01 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:57.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.875 --rc genhtml_branch_coverage=1 00:03:57.875 --rc genhtml_function_coverage=1 00:03:57.875 --rc genhtml_legend=1 00:03:57.875 --rc geninfo_all_blocks=1 00:03:57.875 --rc geninfo_unexecuted_blocks=1 00:03:57.875 00:03:57.875 ' 00:03:57.875 20:34:01 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:57.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.875 --rc genhtml_branch_coverage=1 00:03:57.875 --rc genhtml_function_coverage=1 00:03:57.875 --rc genhtml_legend=1 00:03:57.875 --rc geninfo_all_blocks=1 00:03:57.875 --rc geninfo_unexecuted_blocks=1 00:03:57.875 00:03:57.875 ' 00:03:57.875 20:34:01 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:57.875 20:34:01 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:57.875 20:34:01 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:57.875 20:34:01 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:57.875 20:34:01 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:57.875 20:34:01 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:57.875 20:34:01 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:57.875 20:34:01 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:57.875 20:34:01 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:03:57.875 20:34:01 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:57.875 20:34:01 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:57.875 20:34:01 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:57.875 20:34:01 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:03:57.875 20:34:01 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:03:57.875 20:34:01 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:57.875 20:34:01 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:57.875 20:34:01 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:57.875 20:34:01 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:57.875 20:34:01 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:57.875 20:34:01 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:03:57.875 20:34:01 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:57.875 20:34:01 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:57.875 20:34:01 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:57.875 20:34:01 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.876 20:34:01 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.876 20:34:01 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.876 20:34:01 json_config -- paths/export.sh@5 -- # export PATH 00:03:57.876 20:34:01 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.876 20:34:01 json_config -- nvmf/common.sh@51 -- # : 0 00:03:57.876 20:34:01 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:57.876 20:34:01 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:03:57.876 20:34:01 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:57.876 20:34:01 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:57.876 20:34:01 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:57.876 20:34:01 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:57.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:57.876 20:34:01 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:57.876 20:34:01 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:57.876 20:34:01 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:57.876 20:34:01 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:57.876 20:34:01 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:57.876 20:34:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:57.876 20:34:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:57.876 20:34:01 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:57.876 20:34:01 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:57.876 20:34:01 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:57.876 20:34:01 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:57.876 20:34:01 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:57.876 20:34:01 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:57.876 20:34:01 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:57.876 20:34:01 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:57.876 20:34:01 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:57.876 20:34:01 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:57.876 20:34:01 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:57.876 20:34:01 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:03:57.876 INFO: JSON configuration test init 00:03:57.876 20:34:01 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:03:57.876 20:34:01 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:03:57.876 20:34:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:57.876 20:34:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.876 20:34:01 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:03:57.876 20:34:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:57.876 20:34:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:57.876 20:34:01 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:03:57.876 20:34:01 json_config -- 
json_config/common.sh@9 -- # local app=target 00:03:57.876 20:34:01 json_config -- json_config/common.sh@10 -- # shift 00:03:57.876 20:34:01 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:57.876 20:34:01 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:57.876 20:34:01 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:57.876 20:34:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:57.876 20:34:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:57.876 20:34:01 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1533521 00:03:57.876 20:34:01 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:57.876 20:34:01 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:57.876 Waiting for target to run... 00:03:57.876 20:34:01 json_config -- json_config/common.sh@25 -- # waitforlisten 1533521 /var/tmp/spdk_tgt.sock 00:03:57.876 20:34:01 json_config -- common/autotest_common.sh@835 -- # '[' -z 1533521 ']' 00:03:57.876 20:34:01 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:57.876 20:34:01 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:57.876 20:34:01 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:57.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:57.876 20:34:01 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:57.876 20:34:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.133 [2024-11-26 20:34:01.610408] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:03:58.133 [2024-11-26 20:34:01.610485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1533521 ] 00:03:58.390 [2024-11-26 20:34:01.946778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.390 [2024-11-26 20:34:01.988800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.955 20:34:02 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:58.955 20:34:02 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:58.955 20:34:02 json_config -- json_config/common.sh@26 -- # echo '' 00:03:58.955 00:03:58.955 20:34:02 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:03:58.955 20:34:02 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:03:58.955 20:34:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:58.955 20:34:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.955 20:34:02 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:03:58.955 20:34:02 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:03:58.955 20:34:02 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:58.955 20:34:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.955 20:34:02 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:58.955 20:34:02 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:03:58.955 20:34:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:02.234 20:34:05 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:02.234 20:34:05 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:02.234 20:34:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:02.234 20:34:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.234 20:34:05 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:02.234 20:34:05 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:02.234 20:34:05 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:02.234 20:34:05 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:02.234 20:34:05 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:02.234 20:34:05 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:02.234 20:34:05 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:02.234 20:34:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:02.491 20:34:06 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:02.491 20:34:06 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:02.491 20:34:06 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:02.491 20:34:06 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:02.491 20:34:06 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:02.491 20:34:06 json_config -- json_config/json_config.sh@54 -- # sort 00:04:02.491 20:34:06 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:02.491 20:34:06 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:02.491 20:34:06 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:02.491 20:34:06 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:02.491 20:34:06 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:02.491 20:34:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.491 20:34:06 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:02.491 20:34:06 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:02.491 20:34:06 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:02.491 20:34:06 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:02.491 20:34:06 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:02.491 20:34:06 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:02.491 20:34:06 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:02.491 20:34:06 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:02.491 20:34:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:02.491 20:34:06 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:02.491 20:34:06 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:02.491 20:34:06 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:02.491 20:34:06 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:02.491 20:34:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:02.749 MallocForNvmf0 00:04:02.749 20:34:06 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:02.749 20:34:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:03.005 MallocForNvmf1 00:04:03.005 20:34:06 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:03.005 20:34:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:03.262 [2024-11-26 20:34:06.862744] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:03.262 20:34:06 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:03.262 20:34:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:03.518 20:34:07 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:03.518 20:34:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:03.775 20:34:07 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:03.775 20:34:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:04.065 20:34:07 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:04.065 20:34:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:04.347 [2024-11-26 20:34:07.926197] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:04.347 20:34:07 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:04.347 20:34:07 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:04.347 20:34:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.347 20:34:07 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:04.347 20:34:07 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:04.347 20:34:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.347 20:34:07 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:04.347 20:34:07 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:04.347 20:34:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:04.604 MallocBdevForConfigChangeCheck 00:04:04.604 20:34:08 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:04.604 20:34:08 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:04.604 20:34:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:04.604 20:34:08 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:04.604 20:34:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:05.168 20:34:08 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:05.168 INFO: shutting down applications... 
00:04:05.168 20:34:08 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:05.168 20:34:08 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:05.168 20:34:08 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:05.168 20:34:08 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:07.063 Calling clear_iscsi_subsystem 00:04:07.063 Calling clear_nvmf_subsystem 00:04:07.063 Calling clear_nbd_subsystem 00:04:07.063 Calling clear_ublk_subsystem 00:04:07.063 Calling clear_vhost_blk_subsystem 00:04:07.063 Calling clear_vhost_scsi_subsystem 00:04:07.063 Calling clear_bdev_subsystem 00:04:07.063 20:34:10 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:07.063 20:34:10 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:07.063 20:34:10 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:07.063 20:34:10 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:07.063 20:34:10 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:07.063 20:34:10 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:07.063 20:34:10 json_config -- json_config/json_config.sh@352 -- # break 00:04:07.063 20:34:10 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:07.063 20:34:10 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:07.063 20:34:10 json_config -- json_config/common.sh@31 -- # local app=target 00:04:07.063 20:34:10 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:07.063 20:34:10 json_config -- json_config/common.sh@35 -- # [[ -n 1533521 ]] 00:04:07.063 20:34:10 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1533521 00:04:07.063 20:34:10 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:07.064 20:34:10 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:07.064 20:34:10 json_config -- json_config/common.sh@41 -- # kill -0 1533521 00:04:07.064 20:34:10 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:07.631 20:34:11 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:07.631 20:34:11 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:07.631 20:34:11 json_config -- json_config/common.sh@41 -- # kill -0 1533521 00:04:07.631 20:34:11 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:07.631 20:34:11 json_config -- json_config/common.sh@43 -- # break 00:04:07.631 20:34:11 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:07.631 20:34:11 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:07.631 SPDK target shutdown done 00:04:07.631 20:34:11 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:07.631 INFO: relaunching applications... 
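Note: teardown follows a fixed pattern — clear_config.py walks the subsystems (the "Calling clear_*_subsystem" lines above), then the app receives SIGINT and the wrapper polls for up to 30 half-second intervals until the PID disappears. A minimal sketch of that wait loop, assuming $pid holds the target PID (1533521 in this run), paths shortened:

    test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
    kill -SIGINT "$pid"                       # request a clean shutdown
    for i in $(seq 1 30); do
        kill -0 "$pid" 2>/dev/null || break   # kill -0 only checks liveness
        sleep 0.5
    done
    echo 'SPDK target shutdown done'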
00:04:07.631 20:34:11 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:07.631 20:34:11 json_config -- json_config/common.sh@9 -- # local app=target 00:04:07.631 20:34:11 json_config -- json_config/common.sh@10 -- # shift 00:04:07.631 20:34:11 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:07.631 20:34:11 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:07.631 20:34:11 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:07.631 20:34:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:07.631 20:34:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:07.631 20:34:11 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1534728 00:04:07.631 20:34:11 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:07.631 20:34:11 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:07.631 Waiting for target to run... 00:04:07.631 20:34:11 json_config -- json_config/common.sh@25 -- # waitforlisten 1534728 /var/tmp/spdk_tgt.sock 00:04:07.631 20:34:11 json_config -- common/autotest_common.sh@835 -- # '[' -z 1534728 ']' 00:04:07.631 20:34:11 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:07.631 20:34:11 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:07.631 20:34:11 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:07.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:07.631 20:34:11 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:07.631 20:34:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.631 [2024-11-26 20:34:11.242179] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:04:07.631 [2024-11-26 20:34:11.242272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1534728 ] 00:04:08.198 [2024-11-26 20:34:11.808627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.198 [2024-11-26 20:34:11.860061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.550 [2024-11-26 20:34:14.908449] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:11.550 [2024-11-26 20:34:14.940915] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:11.550 20:34:14 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:11.550 20:34:14 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:11.550 20:34:14 json_config -- json_config/common.sh@26 -- # echo '' 00:04:11.550 00:04:11.550 20:34:14 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:11.550 20:34:14 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:11.550 INFO: Checking if target configuration is the same... 
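Note: the relaunch above starts a fresh spdk_tgt from the saved JSON snapshot and waits on its RPC socket before the comparison below runs. Roughly, and with paths shortened, the wrapper does something like:

    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json spdk_tgt_config.json &
    tgt_pid=$!
    # waitforlisten, approximately: poke the RPC socket until the target answers
    until scripts/rpc.py -s /var/tmp/spdk_tgt.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done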
00:04:11.550 20:34:14 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:11.550 20:34:14 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:11.550 20:34:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:11.550 + '[' 2 -ne 2 ']' 00:04:11.550 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:11.550 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:11.550 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:11.550 +++ basename /dev/fd/62 00:04:11.550 ++ mktemp /tmp/62.XXX 00:04:11.550 + tmp_file_1=/tmp/62.UzK 00:04:11.550 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:11.550 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:11.550 + tmp_file_2=/tmp/spdk_tgt_config.json.7uf 00:04:11.550 + ret=0 00:04:11.550 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:11.807 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:11.807 + diff -u /tmp/62.UzK /tmp/spdk_tgt_config.json.7uf 00:04:11.807 + echo 'INFO: JSON config files are the same' 00:04:11.807 INFO: JSON config files are the same 00:04:11.807 + rm /tmp/62.UzK /tmp/spdk_tgt_config.json.7uf 00:04:11.807 + exit 0 00:04:11.807 20:34:15 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:11.807 20:34:15 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:11.807 INFO: changing configuration and checking if this can be detected... 00:04:11.807 20:34:15 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:11.807 20:34:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:12.065 20:34:15 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:12.065 20:34:15 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:12.065 20:34:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:12.065 + '[' 2 -ne 2 ']' 00:04:12.065 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:12.065 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:12.065 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:12.065 +++ basename /dev/fd/62 00:04:12.065 ++ mktemp /tmp/62.XXX 00:04:12.065 + tmp_file_1=/tmp/62.ZZp 00:04:12.065 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:12.065 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:12.065 + tmp_file_2=/tmp/spdk_tgt_config.json.nqY 00:04:12.065 + ret=0 00:04:12.065 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:12.629 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:12.629 + diff -u /tmp/62.ZZp /tmp/spdk_tgt_config.json.nqY 00:04:12.629 + ret=1 00:04:12.629 + echo '=== Start of file: /tmp/62.ZZp ===' 00:04:12.629 + cat /tmp/62.ZZp 00:04:12.629 + echo '=== End of file: /tmp/62.ZZp ===' 00:04:12.629 + echo '' 00:04:12.629 + echo '=== Start of file: /tmp/spdk_tgt_config.json.nqY ===' 00:04:12.629 + cat /tmp/spdk_tgt_config.json.nqY 00:04:12.629 + echo '=== End of file: /tmp/spdk_tgt_config.json.nqY ===' 00:04:12.629 + echo '' 00:04:12.629 + rm /tmp/62.ZZp /tmp/spdk_tgt_config.json.nqY 00:04:12.629 + exit 1 00:04:12.629 20:34:16 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:12.629 INFO: configuration change detected. 00:04:12.629 20:34:16 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:12.629 20:34:16 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:12.629 20:34:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:12.629 20:34:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.629 20:34:16 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:12.629 20:34:16 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:12.629 20:34:16 json_config -- json_config/json_config.sh@324 -- # [[ -n 1534728 ]] 00:04:12.629 20:34:16 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:12.629 20:34:16 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:12.629 20:34:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:12.629 20:34:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.629 20:34:16 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:12.629 20:34:16 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:12.629 20:34:16 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:12.629 20:34:16 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:12.629 20:34:16 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:12.629 20:34:16 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:12.629 20:34:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:12.629 20:34:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.629 20:34:16 json_config -- json_config/json_config.sh@330 -- # killprocess 1534728 00:04:12.629 20:34:16 json_config -- common/autotest_common.sh@954 -- # '[' -z 1534728 ']' 00:04:12.629 20:34:16 json_config -- common/autotest_common.sh@958 -- # kill -0 1534728 00:04:12.629 20:34:16 json_config -- common/autotest_common.sh@959 -- # uname 00:04:12.629 20:34:16 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:12.629 20:34:16 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1534728 00:04:12.629 20:34:16 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:12.629 20:34:16 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:12.629 20:34:16 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1534728' 00:04:12.629 killing process with pid 1534728 00:04:12.629 20:34:16 json_config -- common/autotest_common.sh@973 -- # kill 1534728 00:04:12.629 20:34:16 json_config -- common/autotest_common.sh@978 -- # wait 1534728 00:04:14.526 20:34:17 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:14.526 20:34:17 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:14.526 20:34:17 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:14.526 20:34:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.526 20:34:17 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:14.526 20:34:17 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:14.526 INFO: Success 00:04:14.526 00:04:14.526 real 0m16.435s 00:04:14.526 user 0m17.976s 00:04:14.526 sys 0m2.680s 00:04:14.526 20:34:17 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.526 20:34:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.526 ************************************ 00:04:14.526 END TEST json_config 00:04:14.526 ************************************ 00:04:14.526 20:34:17 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:14.526 20:34:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.526 20:34:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.526 20:34:17 -- common/autotest_common.sh@10 -- # set +x 00:04:14.526 ************************************ 00:04:14.526 START TEST json_config_extra_key 00:04:14.526 ************************************ 00:04:14.526 20:34:17 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:14.526 20:34:17 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:14.526 20:34:17 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:14.526 20:34:17 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:14.526 20:34:18 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:14.526 20:34:18 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:14.526 20:34:18 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:14.526 20:34:18 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:14.526 20:34:18 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:14.526 20:34:18 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:14.526 20:34:18 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:14.526 20:34:18 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:14.526 20:34:18 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:14.526 20:34:18 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:04:14.526 20:34:18 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:14.526 20:34:18 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:14.526 20:34:18 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:14.526 20:34:18 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:14.526 20:34:18 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:14.526 20:34:18 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:14.526 20:34:18 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:14.526 20:34:18 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:14.526 20:34:18 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:14.526 20:34:18 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:14.526 20:34:18 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:14.526 20:34:18 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:14.526 20:34:18 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:14.526 20:34:18 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:14.526 20:34:18 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:14.526 20:34:18 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:14.526 20:34:18 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:14.526 20:34:18 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:14.526 20:34:18 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:14.526 20:34:18 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:14.526 20:34:18 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:14.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.526 --rc genhtml_branch_coverage=1 00:04:14.526 --rc genhtml_function_coverage=1 00:04:14.526 --rc genhtml_legend=1 00:04:14.526 --rc geninfo_all_blocks=1 00:04:14.526 --rc geninfo_unexecuted_blocks=1 00:04:14.526 00:04:14.526 ' 00:04:14.526 20:34:18 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:14.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.526 --rc genhtml_branch_coverage=1 00:04:14.526 --rc genhtml_function_coverage=1 00:04:14.526 --rc genhtml_legend=1 00:04:14.526 --rc geninfo_all_blocks=1 00:04:14.526 --rc geninfo_unexecuted_blocks=1 00:04:14.526 00:04:14.526 ' 00:04:14.526 20:34:18 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:14.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.526 --rc genhtml_branch_coverage=1 00:04:14.526 --rc genhtml_function_coverage=1 00:04:14.526 --rc genhtml_legend=1 00:04:14.527 --rc geninfo_all_blocks=1 00:04:14.527 --rc geninfo_unexecuted_blocks=1 00:04:14.527 00:04:14.527 ' 00:04:14.527 20:34:18 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:14.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:14.527 --rc genhtml_branch_coverage=1 00:04:14.527 --rc genhtml_function_coverage=1 00:04:14.527 --rc genhtml_legend=1 00:04:14.527 --rc geninfo_all_blocks=1 00:04:14.527 --rc geninfo_unexecuted_blocks=1 00:04:14.527 00:04:14.527 ' 00:04:14.527 20:34:18 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:14.527 20:34:18 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:14.527 20:34:18 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:14.527 20:34:18 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:14.527 20:34:18 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:14.527 20:34:18 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:14.527 20:34:18 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:14.527 20:34:18 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:14.527 20:34:18 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:14.527 20:34:18 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:14.527 20:34:18 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:14.527 20:34:18 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:14.527 20:34:18 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:04:14.527 20:34:18 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:04:14.527 20:34:18 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:14.527 20:34:18 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:14.527 20:34:18 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:14.527 20:34:18 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:14.527 20:34:18 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:14.527 20:34:18 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:14.527 20:34:18 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:14.527 20:34:18 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:14.527 20:34:18 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:14.527 20:34:18 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.527 20:34:18 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.527 20:34:18 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.527 20:34:18 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:14.527 20:34:18 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:14.527 20:34:18 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:14.527 20:34:18 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:14.527 20:34:18 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:14.527 20:34:18 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:14.527 20:34:18 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:14.527 20:34:18 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:14.527 20:34:18 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:14.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:14.527 20:34:18 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:14.527 20:34:18 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:14.527 20:34:18 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:14.527 20:34:18 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:14.527 20:34:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:14.527 20:34:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:14.527 20:34:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:14.527 20:34:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:14.527 20:34:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:14.527 20:34:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:14.527 20:34:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:14.527 20:34:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:14.527 20:34:18 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:14.527 20:34:18 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:14.527 INFO: launching applications... 
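Note: before launching anything, json_config_extra_key sources test/nvmf/common.sh, which derives a host NQN and host ID via nvme gen-hostnqn; the "integer expression expected" message above comes from line 33 testing an empty value with -eq and does not fail the run. One way to reproduce those variables, sketched by hand from the values in this run (the UUID-suffix derivation is an assumption, not the script's exact code):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # 29f67375-a902-e411-ace9-001e67bc3c9a (assumed derivation)
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")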
00:04:14.527 20:34:18 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:14.527 20:34:18 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:14.527 20:34:18 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:14.527 20:34:18 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:14.527 20:34:18 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:14.527 20:34:18 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:14.527 20:34:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:14.527 20:34:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:14.527 20:34:18 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1535698 00:04:14.527 20:34:18 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:14.527 20:34:18 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:14.527 Waiting for target to run... 00:04:14.527 20:34:18 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1535698 /var/tmp/spdk_tgt.sock 00:04:14.527 20:34:18 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1535698 ']' 00:04:14.527 20:34:18 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:14.527 20:34:18 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:14.527 20:34:18 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:14.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:14.527 20:34:18 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:14.527 20:34:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:14.527 [2024-11-26 20:34:18.090143] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:04:14.527 [2024-11-26 20:34:18.090235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1535698 ] 00:04:14.785 [2024-11-26 20:34:18.436525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.785 [2024-11-26 20:34:18.478120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.717 20:34:19 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:15.717 20:34:19 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:15.717 20:34:19 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:15.717 00:04:15.717 20:34:19 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:15.717 INFO: shutting down applications... 
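Note: the extra_key variant launches the target with the canned extra_key.json rather than a config saved at runtime; the -m 0x1 and -s 1024 options on the spdk_tgt command line surface as -c 0x1 and -m 1024 in the DPDK EAL parameters above (one core, 1024 MB of hugepage memory). Equivalent direct invocation, paths shortened:

    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json test/json_config/extra_key.json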
00:04:15.717 20:34:19 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:15.717 20:34:19 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:15.717 20:34:19 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:15.717 20:34:19 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1535698 ]] 00:04:15.717 20:34:19 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1535698 00:04:15.717 20:34:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:15.717 20:34:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:15.717 20:34:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1535698 00:04:15.717 20:34:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:15.978 20:34:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:15.978 20:34:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:15.978 20:34:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1535698 00:04:15.978 20:34:19 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:15.978 20:34:19 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:15.978 20:34:19 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:15.978 20:34:19 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:15.978 SPDK target shutdown done 00:04:15.978 20:34:19 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:15.978 Success 00:04:15.978 00:04:15.978 real 0m1.694s 00:04:15.978 user 0m1.700s 00:04:15.978 sys 0m0.455s 00:04:15.978 20:34:19 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.978 20:34:19 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:15.978 ************************************ 00:04:15.978 END TEST json_config_extra_key 00:04:15.978 ************************************ 00:04:15.978 20:34:19 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:15.978 20:34:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.978 20:34:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.978 20:34:19 -- common/autotest_common.sh@10 -- # set +x 00:04:15.978 ************************************ 00:04:15.978 START TEST alias_rpc 00:04:15.978 ************************************ 00:04:15.978 20:34:19 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:16.236 * Looking for test storage... 
00:04:16.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:16.236 20:34:19 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:16.236 20:34:19 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:16.236 20:34:19 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:16.236 20:34:19 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:16.236 20:34:19 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:16.236 20:34:19 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:16.236 20:34:19 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:16.236 20:34:19 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:16.236 20:34:19 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:16.236 20:34:19 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:16.236 20:34:19 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:16.236 20:34:19 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:16.236 20:34:19 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:16.236 20:34:19 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:16.236 20:34:19 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:16.236 20:34:19 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:16.236 20:34:19 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:16.236 20:34:19 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:16.236 20:34:19 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:16.236 20:34:19 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:16.236 20:34:19 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:16.236 20:34:19 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:16.236 20:34:19 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:16.236 20:34:19 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:16.236 20:34:19 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:16.236 20:34:19 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:16.236 20:34:19 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:16.236 20:34:19 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:16.236 20:34:19 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:16.236 20:34:19 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:16.236 20:34:19 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:16.236 20:34:19 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:16.236 20:34:19 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:16.236 20:34:19 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:16.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.236 --rc genhtml_branch_coverage=1 00:04:16.236 --rc genhtml_function_coverage=1 00:04:16.236 --rc genhtml_legend=1 00:04:16.236 --rc geninfo_all_blocks=1 00:04:16.236 --rc geninfo_unexecuted_blocks=1 00:04:16.236 00:04:16.236 ' 00:04:16.236 20:34:19 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:16.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.236 --rc genhtml_branch_coverage=1 00:04:16.236 --rc genhtml_function_coverage=1 00:04:16.236 --rc genhtml_legend=1 00:04:16.236 --rc geninfo_all_blocks=1 00:04:16.236 --rc geninfo_unexecuted_blocks=1 00:04:16.236 00:04:16.236 ' 00:04:16.236 20:34:19 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:16.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.236 --rc genhtml_branch_coverage=1 00:04:16.236 --rc genhtml_function_coverage=1 00:04:16.236 --rc genhtml_legend=1 00:04:16.236 --rc geninfo_all_blocks=1 00:04:16.236 --rc geninfo_unexecuted_blocks=1 00:04:16.236 00:04:16.236 ' 00:04:16.236 20:34:19 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:16.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:16.236 --rc genhtml_branch_coverage=1 00:04:16.236 --rc genhtml_function_coverage=1 00:04:16.236 --rc genhtml_legend=1 00:04:16.236 --rc geninfo_all_blocks=1 00:04:16.236 --rc geninfo_unexecuted_blocks=1 00:04:16.236 00:04:16.236 ' 00:04:16.236 20:34:19 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:16.236 20:34:19 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1535965 00:04:16.236 20:34:19 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:16.236 20:34:19 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1535965 00:04:16.236 20:34:19 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1535965 ']' 00:04:16.236 20:34:19 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:16.237 20:34:19 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:16.237 20:34:19 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:16.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:16.237 20:34:19 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:16.237 20:34:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.237 [2024-11-26 20:34:19.838779] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:04:16.237 [2024-11-26 20:34:19.838871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1535965 ] 00:04:16.237 [2024-11-26 20:34:19.905873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.494 [2024-11-26 20:34:19.967025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.751 20:34:20 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:16.751 20:34:20 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:16.751 20:34:20 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:17.008 20:34:20 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1535965 00:04:17.008 20:34:20 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1535965 ']' 00:04:17.008 20:34:20 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1535965 00:04:17.008 20:34:20 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:17.008 20:34:20 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:17.008 20:34:20 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1535965 00:04:17.008 20:34:20 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:17.009 20:34:20 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:17.009 20:34:20 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1535965' 00:04:17.009 killing process with pid 1535965 00:04:17.009 20:34:20 alias_rpc -- common/autotest_common.sh@973 -- # kill 1535965 00:04:17.009 20:34:20 alias_rpc -- common/autotest_common.sh@978 -- # wait 1535965 00:04:17.573 00:04:17.573 real 0m1.397s 00:04:17.573 user 0m1.514s 00:04:17.573 sys 0m0.463s 00:04:17.573 20:34:21 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.573 20:34:21 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.573 ************************************ 00:04:17.573 END TEST alias_rpc 00:04:17.573 ************************************ 00:04:17.573 20:34:21 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:17.573 20:34:21 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:17.573 20:34:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.573 20:34:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.573 20:34:21 -- common/autotest_common.sh@10 -- # set +x 00:04:17.573 ************************************ 00:04:17.573 START TEST spdkcli_tcp 00:04:17.573 ************************************ 00:04:17.573 20:34:21 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:17.573 * Looking for test storage... 
00:04:17.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:17.573 20:34:21 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:17.573 20:34:21 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:17.573 20:34:21 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:17.573 20:34:21 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:17.573 20:34:21 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:17.573 20:34:21 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:17.573 20:34:21 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:17.573 20:34:21 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:17.573 20:34:21 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:17.573 20:34:21 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:17.573 20:34:21 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:17.573 20:34:21 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:17.573 20:34:21 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:17.573 20:34:21 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:17.573 20:34:21 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:17.573 20:34:21 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:17.573 20:34:21 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:17.573 20:34:21 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:17.573 20:34:21 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:17.573 20:34:21 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:17.573 20:34:21 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:17.573 20:34:21 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:17.573 20:34:21 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:17.573 20:34:21 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:17.573 20:34:21 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:17.573 20:34:21 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:17.573 20:34:21 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:17.573 20:34:21 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:17.573 20:34:21 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:17.573 20:34:21 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:17.573 20:34:21 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:17.573 20:34:21 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:17.573 20:34:21 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:17.573 20:34:21 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:17.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.573 --rc genhtml_branch_coverage=1 00:04:17.573 --rc genhtml_function_coverage=1 00:04:17.573 --rc genhtml_legend=1 00:04:17.573 --rc geninfo_all_blocks=1 00:04:17.573 --rc geninfo_unexecuted_blocks=1 00:04:17.573 00:04:17.573 ' 00:04:17.573 20:34:21 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:17.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.573 --rc genhtml_branch_coverage=1 00:04:17.573 --rc genhtml_function_coverage=1 00:04:17.573 --rc genhtml_legend=1 00:04:17.573 --rc geninfo_all_blocks=1 00:04:17.573 --rc 
geninfo_unexecuted_blocks=1 00:04:17.573 00:04:17.573 ' 00:04:17.573 20:34:21 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:17.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.573 --rc genhtml_branch_coverage=1 00:04:17.574 --rc genhtml_function_coverage=1 00:04:17.574 --rc genhtml_legend=1 00:04:17.574 --rc geninfo_all_blocks=1 00:04:17.574 --rc geninfo_unexecuted_blocks=1 00:04:17.574 00:04:17.574 ' 00:04:17.574 20:34:21 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:17.574 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.574 --rc genhtml_branch_coverage=1 00:04:17.574 --rc genhtml_function_coverage=1 00:04:17.574 --rc genhtml_legend=1 00:04:17.574 --rc geninfo_all_blocks=1 00:04:17.574 --rc geninfo_unexecuted_blocks=1 00:04:17.574 00:04:17.574 ' 00:04:17.574 20:34:21 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:17.574 20:34:21 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:17.574 20:34:21 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:17.574 20:34:21 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:17.574 20:34:21 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:17.574 20:34:21 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:17.574 20:34:21 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:17.574 20:34:21 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:17.574 20:34:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:17.574 20:34:21 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1536163 00:04:17.574 20:34:21 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:17.574 20:34:21 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1536163 00:04:17.574 20:34:21 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1536163 ']' 00:04:17.574 20:34:21 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:17.574 20:34:21 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:17.574 20:34:21 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:17.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:17.574 20:34:21 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:17.574 20:34:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:17.830 [2024-11-26 20:34:21.293656] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:04:17.830 [2024-11-26 20:34:21.293745] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1536163 ] 00:04:17.830 [2024-11-26 20:34:21.360747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:17.830 [2024-11-26 20:34:21.420737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:17.830 [2024-11-26 20:34:21.420741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.087 20:34:21 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:18.087 20:34:21 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:18.087 20:34:21 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1536292 00:04:18.087 20:34:21 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:18.087 20:34:21 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:18.344 [ 00:04:18.344 "bdev_malloc_delete", 00:04:18.344 "bdev_malloc_create", 00:04:18.344 "bdev_null_resize", 00:04:18.344 "bdev_null_delete", 00:04:18.344 "bdev_null_create", 00:04:18.344 "bdev_nvme_cuse_unregister", 00:04:18.345 "bdev_nvme_cuse_register", 00:04:18.345 "bdev_opal_new_user", 00:04:18.345 "bdev_opal_set_lock_state", 00:04:18.345 "bdev_opal_delete", 00:04:18.345 "bdev_opal_get_info", 00:04:18.345 "bdev_opal_create", 00:04:18.345 "bdev_nvme_opal_revert", 00:04:18.345 "bdev_nvme_opal_init", 00:04:18.345 "bdev_nvme_send_cmd", 00:04:18.345 "bdev_nvme_set_keys", 00:04:18.345 "bdev_nvme_get_path_iostat", 00:04:18.345 "bdev_nvme_get_mdns_discovery_info", 00:04:18.345 "bdev_nvme_stop_mdns_discovery", 00:04:18.345 "bdev_nvme_start_mdns_discovery", 00:04:18.345 "bdev_nvme_set_multipath_policy", 00:04:18.345 "bdev_nvme_set_preferred_path", 00:04:18.345 "bdev_nvme_get_io_paths", 00:04:18.345 "bdev_nvme_remove_error_injection", 00:04:18.345 "bdev_nvme_add_error_injection", 00:04:18.345 "bdev_nvme_get_discovery_info", 00:04:18.345 "bdev_nvme_stop_discovery", 00:04:18.345 "bdev_nvme_start_discovery", 00:04:18.345 "bdev_nvme_get_controller_health_info", 00:04:18.345 "bdev_nvme_disable_controller", 00:04:18.345 "bdev_nvme_enable_controller", 00:04:18.345 "bdev_nvme_reset_controller", 00:04:18.345 "bdev_nvme_get_transport_statistics", 00:04:18.345 "bdev_nvme_apply_firmware", 00:04:18.345 "bdev_nvme_detach_controller", 00:04:18.345 "bdev_nvme_get_controllers", 00:04:18.345 "bdev_nvme_attach_controller", 00:04:18.345 "bdev_nvme_set_hotplug", 00:04:18.345 "bdev_nvme_set_options", 00:04:18.345 "bdev_passthru_delete", 00:04:18.345 "bdev_passthru_create", 00:04:18.345 "bdev_lvol_set_parent_bdev", 00:04:18.345 "bdev_lvol_set_parent", 00:04:18.345 "bdev_lvol_check_shallow_copy", 00:04:18.345 "bdev_lvol_start_shallow_copy", 00:04:18.345 "bdev_lvol_grow_lvstore", 00:04:18.345 "bdev_lvol_get_lvols", 00:04:18.345 "bdev_lvol_get_lvstores", 00:04:18.345 "bdev_lvol_delete", 00:04:18.345 "bdev_lvol_set_read_only", 00:04:18.345 "bdev_lvol_resize", 00:04:18.345 "bdev_lvol_decouple_parent", 00:04:18.345 "bdev_lvol_inflate", 00:04:18.345 "bdev_lvol_rename", 00:04:18.345 "bdev_lvol_clone_bdev", 00:04:18.345 "bdev_lvol_clone", 00:04:18.345 "bdev_lvol_snapshot", 00:04:18.345 "bdev_lvol_create", 00:04:18.345 "bdev_lvol_delete_lvstore", 00:04:18.345 "bdev_lvol_rename_lvstore", 
00:04:18.345 "bdev_lvol_create_lvstore", 00:04:18.345 "bdev_raid_set_options", 00:04:18.345 "bdev_raid_remove_base_bdev", 00:04:18.345 "bdev_raid_add_base_bdev", 00:04:18.345 "bdev_raid_delete", 00:04:18.345 "bdev_raid_create", 00:04:18.345 "bdev_raid_get_bdevs", 00:04:18.345 "bdev_error_inject_error", 00:04:18.345 "bdev_error_delete", 00:04:18.345 "bdev_error_create", 00:04:18.345 "bdev_split_delete", 00:04:18.345 "bdev_split_create", 00:04:18.345 "bdev_delay_delete", 00:04:18.345 "bdev_delay_create", 00:04:18.345 "bdev_delay_update_latency", 00:04:18.345 "bdev_zone_block_delete", 00:04:18.345 "bdev_zone_block_create", 00:04:18.345 "blobfs_create", 00:04:18.345 "blobfs_detect", 00:04:18.345 "blobfs_set_cache_size", 00:04:18.345 "bdev_aio_delete", 00:04:18.345 "bdev_aio_rescan", 00:04:18.345 "bdev_aio_create", 00:04:18.345 "bdev_ftl_set_property", 00:04:18.345 "bdev_ftl_get_properties", 00:04:18.345 "bdev_ftl_get_stats", 00:04:18.345 "bdev_ftl_unmap", 00:04:18.345 "bdev_ftl_unload", 00:04:18.345 "bdev_ftl_delete", 00:04:18.345 "bdev_ftl_load", 00:04:18.345 "bdev_ftl_create", 00:04:18.345 "bdev_virtio_attach_controller", 00:04:18.345 "bdev_virtio_scsi_get_devices", 00:04:18.345 "bdev_virtio_detach_controller", 00:04:18.345 "bdev_virtio_blk_set_hotplug", 00:04:18.345 "bdev_iscsi_delete", 00:04:18.345 "bdev_iscsi_create", 00:04:18.345 "bdev_iscsi_set_options", 00:04:18.345 "accel_error_inject_error", 00:04:18.345 "ioat_scan_accel_module", 00:04:18.345 "dsa_scan_accel_module", 00:04:18.345 "iaa_scan_accel_module", 00:04:18.345 "vfu_virtio_create_fs_endpoint", 00:04:18.345 "vfu_virtio_create_scsi_endpoint", 00:04:18.345 "vfu_virtio_scsi_remove_target", 00:04:18.345 "vfu_virtio_scsi_add_target", 00:04:18.345 "vfu_virtio_create_blk_endpoint", 00:04:18.345 "vfu_virtio_delete_endpoint", 00:04:18.345 "keyring_file_remove_key", 00:04:18.345 "keyring_file_add_key", 00:04:18.345 "keyring_linux_set_options", 00:04:18.345 "fsdev_aio_delete", 00:04:18.345 "fsdev_aio_create", 00:04:18.345 "iscsi_get_histogram", 00:04:18.345 "iscsi_enable_histogram", 00:04:18.345 "iscsi_set_options", 00:04:18.345 "iscsi_get_auth_groups", 00:04:18.345 "iscsi_auth_group_remove_secret", 00:04:18.345 "iscsi_auth_group_add_secret", 00:04:18.345 "iscsi_delete_auth_group", 00:04:18.345 "iscsi_create_auth_group", 00:04:18.345 "iscsi_set_discovery_auth", 00:04:18.345 "iscsi_get_options", 00:04:18.345 "iscsi_target_node_request_logout", 00:04:18.345 "iscsi_target_node_set_redirect", 00:04:18.345 "iscsi_target_node_set_auth", 00:04:18.345 "iscsi_target_node_add_lun", 00:04:18.345 "iscsi_get_stats", 00:04:18.345 "iscsi_get_connections", 00:04:18.345 "iscsi_portal_group_set_auth", 00:04:18.345 "iscsi_start_portal_group", 00:04:18.345 "iscsi_delete_portal_group", 00:04:18.345 "iscsi_create_portal_group", 00:04:18.345 "iscsi_get_portal_groups", 00:04:18.345 "iscsi_delete_target_node", 00:04:18.345 "iscsi_target_node_remove_pg_ig_maps", 00:04:18.345 "iscsi_target_node_add_pg_ig_maps", 00:04:18.345 "iscsi_create_target_node", 00:04:18.345 "iscsi_get_target_nodes", 00:04:18.345 "iscsi_delete_initiator_group", 00:04:18.345 "iscsi_initiator_group_remove_initiators", 00:04:18.345 "iscsi_initiator_group_add_initiators", 00:04:18.345 "iscsi_create_initiator_group", 00:04:18.345 "iscsi_get_initiator_groups", 00:04:18.345 "nvmf_set_crdt", 00:04:18.345 "nvmf_set_config", 00:04:18.345 "nvmf_set_max_subsystems", 00:04:18.345 "nvmf_stop_mdns_prr", 00:04:18.345 "nvmf_publish_mdns_prr", 00:04:18.345 "nvmf_subsystem_get_listeners", 00:04:18.345 
"nvmf_subsystem_get_qpairs", 00:04:18.345 "nvmf_subsystem_get_controllers", 00:04:18.345 "nvmf_get_stats", 00:04:18.345 "nvmf_get_transports", 00:04:18.345 "nvmf_create_transport", 00:04:18.345 "nvmf_get_targets", 00:04:18.345 "nvmf_delete_target", 00:04:18.345 "nvmf_create_target", 00:04:18.345 "nvmf_subsystem_allow_any_host", 00:04:18.345 "nvmf_subsystem_set_keys", 00:04:18.345 "nvmf_subsystem_remove_host", 00:04:18.345 "nvmf_subsystem_add_host", 00:04:18.345 "nvmf_ns_remove_host", 00:04:18.345 "nvmf_ns_add_host", 00:04:18.345 "nvmf_subsystem_remove_ns", 00:04:18.345 "nvmf_subsystem_set_ns_ana_group", 00:04:18.345 "nvmf_subsystem_add_ns", 00:04:18.345 "nvmf_subsystem_listener_set_ana_state", 00:04:18.345 "nvmf_discovery_get_referrals", 00:04:18.345 "nvmf_discovery_remove_referral", 00:04:18.345 "nvmf_discovery_add_referral", 00:04:18.345 "nvmf_subsystem_remove_listener", 00:04:18.345 "nvmf_subsystem_add_listener", 00:04:18.345 "nvmf_delete_subsystem", 00:04:18.345 "nvmf_create_subsystem", 00:04:18.345 "nvmf_get_subsystems", 00:04:18.345 "env_dpdk_get_mem_stats", 00:04:18.345 "nbd_get_disks", 00:04:18.345 "nbd_stop_disk", 00:04:18.345 "nbd_start_disk", 00:04:18.345 "ublk_recover_disk", 00:04:18.345 "ublk_get_disks", 00:04:18.345 "ublk_stop_disk", 00:04:18.345 "ublk_start_disk", 00:04:18.345 "ublk_destroy_target", 00:04:18.345 "ublk_create_target", 00:04:18.345 "virtio_blk_create_transport", 00:04:18.345 "virtio_blk_get_transports", 00:04:18.345 "vhost_controller_set_coalescing", 00:04:18.345 "vhost_get_controllers", 00:04:18.345 "vhost_delete_controller", 00:04:18.345 "vhost_create_blk_controller", 00:04:18.345 "vhost_scsi_controller_remove_target", 00:04:18.345 "vhost_scsi_controller_add_target", 00:04:18.345 "vhost_start_scsi_controller", 00:04:18.345 "vhost_create_scsi_controller", 00:04:18.345 "thread_set_cpumask", 00:04:18.345 "scheduler_set_options", 00:04:18.345 "framework_get_governor", 00:04:18.345 "framework_get_scheduler", 00:04:18.345 "framework_set_scheduler", 00:04:18.345 "framework_get_reactors", 00:04:18.345 "thread_get_io_channels", 00:04:18.345 "thread_get_pollers", 00:04:18.345 "thread_get_stats", 00:04:18.345 "framework_monitor_context_switch", 00:04:18.345 "spdk_kill_instance", 00:04:18.345 "log_enable_timestamps", 00:04:18.345 "log_get_flags", 00:04:18.345 "log_clear_flag", 00:04:18.345 "log_set_flag", 00:04:18.345 "log_get_level", 00:04:18.345 "log_set_level", 00:04:18.345 "log_get_print_level", 00:04:18.345 "log_set_print_level", 00:04:18.345 "framework_enable_cpumask_locks", 00:04:18.345 "framework_disable_cpumask_locks", 00:04:18.345 "framework_wait_init", 00:04:18.346 "framework_start_init", 00:04:18.346 "scsi_get_devices", 00:04:18.346 "bdev_get_histogram", 00:04:18.346 "bdev_enable_histogram", 00:04:18.346 "bdev_set_qos_limit", 00:04:18.346 "bdev_set_qd_sampling_period", 00:04:18.346 "bdev_get_bdevs", 00:04:18.346 "bdev_reset_iostat", 00:04:18.346 "bdev_get_iostat", 00:04:18.346 "bdev_examine", 00:04:18.346 "bdev_wait_for_examine", 00:04:18.346 "bdev_set_options", 00:04:18.346 "accel_get_stats", 00:04:18.346 "accel_set_options", 00:04:18.346 "accel_set_driver", 00:04:18.346 "accel_crypto_key_destroy", 00:04:18.346 "accel_crypto_keys_get", 00:04:18.346 "accel_crypto_key_create", 00:04:18.346 "accel_assign_opc", 00:04:18.346 "accel_get_module_info", 00:04:18.346 "accel_get_opc_assignments", 00:04:18.346 "vmd_rescan", 00:04:18.346 "vmd_remove_device", 00:04:18.346 "vmd_enable", 00:04:18.346 "sock_get_default_impl", 00:04:18.346 "sock_set_default_impl", 
00:04:18.346 "sock_impl_set_options", 00:04:18.346 "sock_impl_get_options", 00:04:18.346 "iobuf_get_stats", 00:04:18.346 "iobuf_set_options", 00:04:18.346 "keyring_get_keys", 00:04:18.346 "vfu_tgt_set_base_path", 00:04:18.346 "framework_get_pci_devices", 00:04:18.346 "framework_get_config", 00:04:18.346 "framework_get_subsystems", 00:04:18.346 "fsdev_set_opts", 00:04:18.346 "fsdev_get_opts", 00:04:18.346 "trace_get_info", 00:04:18.346 "trace_get_tpoint_group_mask", 00:04:18.346 "trace_disable_tpoint_group", 00:04:18.346 "trace_enable_tpoint_group", 00:04:18.346 "trace_clear_tpoint_mask", 00:04:18.346 "trace_set_tpoint_mask", 00:04:18.346 "notify_get_notifications", 00:04:18.346 "notify_get_types", 00:04:18.346 "spdk_get_version", 00:04:18.346 "rpc_get_methods" 00:04:18.346 ] 00:04:18.346 20:34:21 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:18.346 20:34:21 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:18.346 20:34:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:18.346 20:34:21 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:18.346 20:34:21 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1536163 00:04:18.346 20:34:21 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1536163 ']' 00:04:18.346 20:34:21 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1536163 00:04:18.346 20:34:21 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:18.346 20:34:21 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:18.346 20:34:21 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1536163 00:04:18.346 20:34:22 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:18.346 20:34:22 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:18.346 20:34:22 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1536163' 00:04:18.346 killing process with pid 1536163 00:04:18.346 20:34:22 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1536163 00:04:18.346 20:34:22 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1536163 00:04:18.910 00:04:18.910 real 0m1.362s 00:04:18.910 user 0m2.450s 00:04:18.910 sys 0m0.471s 00:04:18.910 20:34:22 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.910 20:34:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:18.910 ************************************ 00:04:18.910 END TEST spdkcli_tcp 00:04:18.910 ************************************ 00:04:18.910 20:34:22 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:18.910 20:34:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.910 20:34:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.910 20:34:22 -- common/autotest_common.sh@10 -- # set +x 00:04:18.910 ************************************ 00:04:18.910 START TEST dpdk_mem_utility 00:04:18.910 ************************************ 00:04:18.910 20:34:22 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:18.910 * Looking for test storage... 
00:04:18.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:18.910 20:34:22 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:18.910 20:34:22 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:18.910 20:34:22 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:19.168 20:34:22 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:19.168 20:34:22 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.168 20:34:22 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.168 20:34:22 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.168 20:34:22 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.168 20:34:22 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.168 20:34:22 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:19.168 20:34:22 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.168 20:34:22 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.168 20:34:22 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.168 20:34:22 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.168 20:34:22 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:19.168 20:34:22 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:19.168 20:34:22 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:19.168 20:34:22 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.168 20:34:22 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:19.168 20:34:22 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:19.168 20:34:22 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:19.168 20:34:22 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.168 20:34:22 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:19.168 20:34:22 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.168 20:34:22 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:19.168 20:34:22 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:19.168 20:34:22 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.168 20:34:22 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:19.168 20:34:22 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.168 20:34:22 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.168 20:34:22 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.168 20:34:22 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:19.168 20:34:22 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.168 20:34:22 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:19.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.168 --rc genhtml_branch_coverage=1 00:04:19.168 --rc genhtml_function_coverage=1 00:04:19.168 --rc genhtml_legend=1 00:04:19.168 --rc geninfo_all_blocks=1 00:04:19.168 --rc geninfo_unexecuted_blocks=1 00:04:19.168 00:04:19.168 ' 00:04:19.168 20:34:22 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:19.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.168 --rc 
genhtml_branch_coverage=1 00:04:19.168 --rc genhtml_function_coverage=1 00:04:19.168 --rc genhtml_legend=1 00:04:19.168 --rc geninfo_all_blocks=1 00:04:19.168 --rc geninfo_unexecuted_blocks=1 00:04:19.168 00:04:19.168 ' 00:04:19.168 20:34:22 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:19.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.169 --rc genhtml_branch_coverage=1 00:04:19.169 --rc genhtml_function_coverage=1 00:04:19.169 --rc genhtml_legend=1 00:04:19.169 --rc geninfo_all_blocks=1 00:04:19.169 --rc geninfo_unexecuted_blocks=1 00:04:19.169 00:04:19.169 ' 00:04:19.169 20:34:22 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:19.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.169 --rc genhtml_branch_coverage=1 00:04:19.169 --rc genhtml_function_coverage=1 00:04:19.169 --rc genhtml_legend=1 00:04:19.169 --rc geninfo_all_blocks=1 00:04:19.169 --rc geninfo_unexecuted_blocks=1 00:04:19.169 00:04:19.169 ' 00:04:19.169 20:34:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:19.169 20:34:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1536494 00:04:19.169 20:34:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:19.169 20:34:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1536494 00:04:19.169 20:34:22 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1536494 ']' 00:04:19.169 20:34:22 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:19.169 20:34:22 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:19.169 20:34:22 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:19.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:19.169 20:34:22 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:19.169 20:34:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:19.169 [2024-11-26 20:34:22.698975] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:04:19.169 [2024-11-26 20:34:22.699069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1536494 ] 00:04:19.169 [2024-11-26 20:34:22.764331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.169 [2024-11-26 20:34:22.823217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.426 20:34:23 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:19.426 20:34:23 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:19.426 20:34:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:19.426 20:34:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:19.427 20:34:23 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.427 20:34:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:19.427 { 00:04:19.427 "filename": "/tmp/spdk_mem_dump.txt" 00:04:19.427 } 00:04:19.427 20:34:23 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.427 20:34:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:19.685 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:19.685 1 heaps totaling size 818.000000 MiB 00:04:19.685 size: 818.000000 MiB heap id: 0 00:04:19.685 end heaps---------- 00:04:19.685 9 mempools totaling size 603.782043 MiB 00:04:19.685 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:19.685 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:19.685 size: 100.555481 MiB name: bdev_io_1536494 00:04:19.685 size: 50.003479 MiB name: msgpool_1536494 00:04:19.685 size: 36.509338 MiB name: fsdev_io_1536494 00:04:19.685 size: 21.763794 MiB name: PDU_Pool 00:04:19.685 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:19.685 size: 4.133484 MiB name: evtpool_1536494 00:04:19.685 size: 0.026123 MiB name: Session_Pool 00:04:19.685 end mempools------- 00:04:19.685 6 memzones totaling size 4.142822 MiB 00:04:19.685 size: 1.000366 MiB name: RG_ring_0_1536494 00:04:19.685 size: 1.000366 MiB name: RG_ring_1_1536494 00:04:19.685 size: 1.000366 MiB name: RG_ring_4_1536494 00:04:19.685 size: 1.000366 MiB name: RG_ring_5_1536494 00:04:19.685 size: 0.125366 MiB name: RG_ring_2_1536494 00:04:19.685 size: 0.015991 MiB name: RG_ring_3_1536494 00:04:19.685 end memzones------- 00:04:19.685 20:34:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:19.685 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:19.685 list of free elements. 
size: 10.852478 MiB 00:04:19.685 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:19.685 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:19.685 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:19.685 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:19.685 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:19.685 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:19.685 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:19.685 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:19.685 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:19.685 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:19.685 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:19.685 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:19.685 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:19.685 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:19.685 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:19.685 list of standard malloc elements. size: 199.218628 MiB 00:04:19.685 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:19.685 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:19.685 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:19.685 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:19.685 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:19.685 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:19.685 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:19.685 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:19.685 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:19.685 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:19.685 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:19.685 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:19.685 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:19.685 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:19.685 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:19.685 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:19.685 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:19.685 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:19.685 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:19.685 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:19.685 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:19.685 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:19.685 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:19.685 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:19.685 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:19.685 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:19.685 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:19.685 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:19.685 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:19.685 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:19.685 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:19.685 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:19.685 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:19.685 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:19.685 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:19.685 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:19.685 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:19.685 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:19.685 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:19.685 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:19.685 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:19.685 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:19.685 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:19.685 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:19.685 list of memzone associated elements. size: 607.928894 MiB 00:04:19.685 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:19.685 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:19.685 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:19.685 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:19.685 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:19.685 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1536494_0 00:04:19.685 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:19.685 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1536494_0 00:04:19.685 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:19.685 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1536494_0 00:04:19.685 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:19.685 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:19.685 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:19.685 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:19.686 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:19.686 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1536494_0 00:04:19.686 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:19.686 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1536494 00:04:19.686 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:19.686 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1536494 00:04:19.686 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:19.686 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:19.686 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:19.686 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:19.686 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:19.686 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:19.686 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:19.686 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:19.686 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:19.686 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1536494 00:04:19.686 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:19.686 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1536494 00:04:19.686 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:19.686 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1536494 00:04:19.686 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:04:19.686 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1536494 00:04:19.686 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:19.686 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1536494 00:04:19.686 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:19.686 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1536494 00:04:19.686 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:19.686 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:19.686 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:19.686 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:19.686 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:19.686 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:19.686 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:19.686 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1536494 00:04:19.686 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:19.686 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1536494 00:04:19.686 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:19.686 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:19.686 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:19.686 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:19.686 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:19.686 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1536494 00:04:19.686 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:19.686 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:19.686 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:19.686 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1536494 00:04:19.686 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:19.686 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1536494 00:04:19.686 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:19.686 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1536494 00:04:19.686 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:19.686 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:19.686 20:34:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:19.686 20:34:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1536494 00:04:19.686 20:34:23 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1536494 ']' 00:04:19.686 20:34:23 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1536494 00:04:19.686 20:34:23 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:19.686 20:34:23 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:19.686 20:34:23 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1536494 00:04:19.686 20:34:23 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:19.686 20:34:23 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:19.686 20:34:23 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1536494' 00:04:19.686 killing process with pid 1536494 00:04:19.686 20:34:23 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1536494 00:04:19.686 20:34:23 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1536494 00:04:20.250 00:04:20.250 real 0m1.175s 00:04:20.250 user 0m1.135s 00:04:20.250 sys 0m0.444s 00:04:20.250 20:34:23 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.250 20:34:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:20.250 ************************************ 00:04:20.250 END TEST dpdk_mem_utility 00:04:20.250 ************************************ 00:04:20.250 20:34:23 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:20.250 20:34:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.250 20:34:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.250 20:34:23 -- common/autotest_common.sh@10 -- # set +x 00:04:20.250 ************************************ 00:04:20.250 START TEST event 00:04:20.250 ************************************ 00:04:20.250 20:34:23 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:20.250 * Looking for test storage... 00:04:20.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:20.250 20:34:23 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:20.250 20:34:23 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:20.250 20:34:23 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:20.250 20:34:23 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:20.250 20:34:23 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.250 20:34:23 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.250 20:34:23 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.250 20:34:23 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.250 20:34:23 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.250 20:34:23 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.250 20:34:23 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.250 20:34:23 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.250 20:34:23 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.250 20:34:23 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.250 20:34:23 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:20.250 20:34:23 event -- scripts/common.sh@344 -- # case "$op" in 00:04:20.250 20:34:23 event -- scripts/common.sh@345 -- # : 1 00:04:20.250 20:34:23 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.250 20:34:23 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:20.250 20:34:23 event -- scripts/common.sh@365 -- # decimal 1 00:04:20.250 20:34:23 event -- scripts/common.sh@353 -- # local d=1 00:04:20.250 20:34:23 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.250 20:34:23 event -- scripts/common.sh@355 -- # echo 1 00:04:20.250 20:34:23 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.250 20:34:23 event -- scripts/common.sh@366 -- # decimal 2 00:04:20.250 20:34:23 event -- scripts/common.sh@353 -- # local d=2 00:04:20.250 20:34:23 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.250 20:34:23 event -- scripts/common.sh@355 -- # echo 2 00:04:20.250 20:34:23 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.250 20:34:23 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.250 20:34:23 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.250 20:34:23 event -- scripts/common.sh@368 -- # return 0 00:04:20.250 20:34:23 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.250 20:34:23 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:20.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.250 --rc genhtml_branch_coverage=1 00:04:20.250 --rc genhtml_function_coverage=1 00:04:20.250 --rc genhtml_legend=1 00:04:20.250 --rc geninfo_all_blocks=1 00:04:20.250 --rc geninfo_unexecuted_blocks=1 00:04:20.250 00:04:20.250 ' 00:04:20.250 20:34:23 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:20.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.250 --rc genhtml_branch_coverage=1 00:04:20.250 --rc genhtml_function_coverage=1 00:04:20.250 --rc genhtml_legend=1 00:04:20.250 --rc geninfo_all_blocks=1 00:04:20.250 --rc geninfo_unexecuted_blocks=1 00:04:20.250 00:04:20.250 ' 00:04:20.250 20:34:23 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:20.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.250 --rc genhtml_branch_coverage=1 00:04:20.250 --rc genhtml_function_coverage=1 00:04:20.250 --rc genhtml_legend=1 00:04:20.250 --rc geninfo_all_blocks=1 00:04:20.250 --rc geninfo_unexecuted_blocks=1 00:04:20.250 00:04:20.250 ' 00:04:20.250 20:34:23 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:20.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.250 --rc genhtml_branch_coverage=1 00:04:20.250 --rc genhtml_function_coverage=1 00:04:20.250 --rc genhtml_legend=1 00:04:20.250 --rc geninfo_all_blocks=1 00:04:20.250 --rc geninfo_unexecuted_blocks=1 00:04:20.250 00:04:20.250 ' 00:04:20.250 20:34:23 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:20.250 20:34:23 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:20.250 20:34:23 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:20.250 20:34:23 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:20.250 20:34:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.250 20:34:23 event -- common/autotest_common.sh@10 -- # set +x 00:04:20.250 ************************************ 00:04:20.250 START TEST event_perf 00:04:20.250 ************************************ 00:04:20.250 20:34:23 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:20.250 Running I/O for 1 seconds...[2024-11-26 20:34:23.903835] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:04:20.250 [2024-11-26 20:34:23.903896] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1536692 ] 00:04:20.508 [2024-11-26 20:34:23.971449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:20.508 [2024-11-26 20:34:24.033028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:20.508 [2024-11-26 20:34:24.033133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:20.508 [2024-11-26 20:34:24.033225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:20.508 [2024-11-26 20:34:24.033233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.438 Running I/O for 1 seconds... 00:04:21.438 lcore 0: 235268 00:04:21.438 lcore 1: 235266 00:04:21.438 lcore 2: 235267 00:04:21.438 lcore 3: 235267 00:04:21.438 done. 00:04:21.438 00:04:21.438 real 0m1.209s 00:04:21.438 user 0m4.136s 00:04:21.438 sys 0m0.069s 00:04:21.438 20:34:25 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.438 20:34:25 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:21.438 ************************************ 00:04:21.438 END TEST event_perf 00:04:21.438 ************************************ 00:04:21.438 20:34:25 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:21.438 20:34:25 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:21.438 20:34:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.438 20:34:25 event -- common/autotest_common.sh@10 -- # set +x 00:04:21.696 ************************************ 00:04:21.696 START TEST event_reactor 00:04:21.696 ************************************ 00:04:21.696 20:34:25 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:21.696 [2024-11-26 20:34:25.158999] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:04:21.696 [2024-11-26 20:34:25.159064] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1536851 ] 00:04:21.696 [2024-11-26 20:34:25.225233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.696 [2024-11-26 20:34:25.282300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.067 test_start 00:04:23.067 oneshot 00:04:23.067 tick 100 00:04:23.067 tick 100 00:04:23.067 tick 250 00:04:23.067 tick 100 00:04:23.067 tick 100 00:04:23.067 tick 100 00:04:23.067 tick 250 00:04:23.067 tick 500 00:04:23.067 tick 100 00:04:23.067 tick 100 00:04:23.067 tick 250 00:04:23.067 tick 100 00:04:23.067 tick 100 00:04:23.067 test_end 00:04:23.067 00:04:23.067 real 0m1.200s 00:04:23.067 user 0m1.127s 00:04:23.067 sys 0m0.069s 00:04:23.067 20:34:26 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.067 20:34:26 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:23.067 ************************************ 00:04:23.067 END TEST event_reactor 00:04:23.067 ************************************ 00:04:23.067 20:34:26 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:23.067 20:34:26 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:23.067 20:34:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.067 20:34:26 event -- common/autotest_common.sh@10 -- # set +x 00:04:23.067 ************************************ 00:04:23.067 START TEST event_reactor_perf 00:04:23.067 ************************************ 00:04:23.067 20:34:26 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:23.067 [2024-11-26 20:34:26.410109] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:04:23.067 [2024-11-26 20:34:26.410172] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1537010 ] 00:04:23.067 [2024-11-26 20:34:26.475857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.067 [2024-11-26 20:34:26.530380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.999 test_start 00:04:23.999 test_end 00:04:23.999 Performance: 448156 events per second 00:04:23.999 00:04:23.999 real 0m1.194s 00:04:24.000 user 0m1.125s 00:04:24.000 sys 0m0.065s 00:04:24.000 20:34:27 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.000 20:34:27 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:24.000 ************************************ 00:04:24.000 END TEST event_reactor_perf 00:04:24.000 ************************************ 00:04:24.000 20:34:27 event -- event/event.sh@49 -- # uname -s 00:04:24.000 20:34:27 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:24.000 20:34:27 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:24.000 20:34:27 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.000 20:34:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.000 20:34:27 event -- common/autotest_common.sh@10 -- # set +x 00:04:24.000 ************************************ 00:04:24.000 START TEST event_scheduler 00:04:24.000 ************************************ 00:04:24.000 20:34:27 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:24.000 * Looking for test storage... 
00:04:24.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:24.258 20:34:27 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:24.258 20:34:27 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:24.258 20:34:27 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:24.258 20:34:27 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:24.258 20:34:27 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.258 20:34:27 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.258 20:34:27 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.258 20:34:27 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.258 20:34:27 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.258 20:34:27 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.258 20:34:27 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:24.258 20:34:27 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.258 20:34:27 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.258 20:34:27 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.258 20:34:27 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.258 20:34:27 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:24.258 20:34:27 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:24.258 20:34:27 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.258 20:34:27 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:24.258 20:34:27 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:24.258 20:34:27 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:24.258 20:34:27 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.258 20:34:27 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:24.258 20:34:27 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.258 20:34:27 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:24.258 20:34:27 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:24.258 20:34:27 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.258 20:34:27 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:24.258 20:34:27 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.258 20:34:27 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.258 20:34:27 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.258 20:34:27 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:24.258 20:34:27 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.258 20:34:27 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:24.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.258 --rc genhtml_branch_coverage=1 00:04:24.258 --rc genhtml_function_coverage=1 00:04:24.258 --rc genhtml_legend=1 00:04:24.258 --rc geninfo_all_blocks=1 00:04:24.258 --rc geninfo_unexecuted_blocks=1 00:04:24.258 00:04:24.258 ' 00:04:24.258 20:34:27 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:24.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.258 --rc genhtml_branch_coverage=1 00:04:24.258 --rc genhtml_function_coverage=1 00:04:24.258 --rc genhtml_legend=1 00:04:24.258 --rc geninfo_all_blocks=1 00:04:24.258 --rc geninfo_unexecuted_blocks=1 00:04:24.258 00:04:24.258 ' 00:04:24.258 20:34:27 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:24.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.258 --rc genhtml_branch_coverage=1 00:04:24.258 --rc genhtml_function_coverage=1 00:04:24.258 --rc genhtml_legend=1 00:04:24.258 --rc geninfo_all_blocks=1 00:04:24.258 --rc geninfo_unexecuted_blocks=1 00:04:24.258 00:04:24.258 ' 00:04:24.258 20:34:27 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:24.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.258 --rc genhtml_branch_coverage=1 00:04:24.258 --rc genhtml_function_coverage=1 00:04:24.258 --rc genhtml_legend=1 00:04:24.258 --rc geninfo_all_blocks=1 00:04:24.258 --rc geninfo_unexecuted_blocks=1 00:04:24.258 00:04:24.258 ' 00:04:24.258 20:34:27 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:24.258 20:34:27 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1537202 00:04:24.258 20:34:27 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:24.258 20:34:27 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:24.258 20:34:27 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
1537202 00:04:24.258 20:34:27 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1537202 ']' 00:04:24.258 20:34:27 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.259 20:34:27 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:24.259 20:34:27 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.259 20:34:27 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:24.259 20:34:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:24.259 [2024-11-26 20:34:27.823810] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:04:24.259 [2024-11-26 20:34:27.823899] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1537202 ] 00:04:24.259 [2024-11-26 20:34:27.892819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:24.517 [2024-11-26 20:34:27.956749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.517 [2024-11-26 20:34:27.956809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:24.517 [2024-11-26 20:34:27.956858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:24.517 [2024-11-26 20:34:27.956862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:24.517 20:34:28 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:24.517 20:34:28 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:24.517 20:34:28 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:24.517 20:34:28 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.517 20:34:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:24.517 [2024-11-26 20:34:28.077806] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:24.517 [2024-11-26 20:34:28.077834] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:24.517 [2024-11-26 20:34:28.077851] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:24.517 [2024-11-26 20:34:28.077861] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:24.517 [2024-11-26 20:34:28.077871] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:24.517 20:34:28 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.517 20:34:28 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:24.517 20:34:28 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.517 20:34:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:24.517 [2024-11-26 20:34:28.183377] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:04:24.517 20:34:28 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.517 20:34:28 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:24.517 20:34:28 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.517 20:34:28 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.517 20:34:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:24.517 ************************************ 00:04:24.517 START TEST scheduler_create_thread 00:04:24.517 ************************************ 00:04:24.517 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:24.517 20:34:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:24.517 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.517 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.775 2 00:04:24.775 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.775 20:34:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:24.775 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.775 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.775 3 00:04:24.775 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.775 20:34:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:24.775 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.775 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.775 4 00:04:24.775 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.775 20:34:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:24.775 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.775 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.775 5 00:04:24.775 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.775 20:34:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:24.775 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.775 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.775 6 00:04:24.775 20:34:28 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.776 20:34:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:24.776 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.776 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.776 7 00:04:24.776 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.776 20:34:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:24.776 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.776 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.776 8 00:04:24.776 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.776 20:34:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:24.776 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.776 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.776 9 00:04:24.776 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.776 20:34:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:24.776 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.776 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.776 10 00:04:24.776 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.776 20:34:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:24.776 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.776 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.776 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.776 20:34:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:24.776 20:34:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:24.776 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.776 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.776 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.776 20:34:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:24.776 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.776 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:24.776 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.776 20:34:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:24.776 20:34:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:24.776 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.776 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.365 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.365 00:04:25.365 real 0m0.591s 00:04:25.365 user 0m0.008s 00:04:25.365 sys 0m0.006s 00:04:25.365 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.365 20:34:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:25.365 ************************************ 00:04:25.365 END TEST scheduler_create_thread 00:04:25.365 ************************************ 00:04:25.365 20:34:28 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:25.365 20:34:28 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1537202 00:04:25.365 20:34:28 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1537202 ']' 00:04:25.365 20:34:28 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 1537202 00:04:25.365 20:34:28 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:25.365 20:34:28 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:25.365 20:34:28 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1537202 00:04:25.365 20:34:28 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:25.365 20:34:28 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:25.365 20:34:28 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1537202' 00:04:25.365 killing process with pid 1537202 00:04:25.365 20:34:28 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1537202 00:04:25.365 20:34:28 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1537202 00:04:25.622 [2024-11-26 20:34:29.283517] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
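The scheduler_create_thread subtest above exercises thread creation and teardown through the scheduler test plugin. A rough hand-run equivalent of the plugin-backed RPC sequence seen in the trace, assuming the plugin module is importable (the harness puts test/event/scheduler on the Python path) and noting that the thread IDs 11 and 12 are the values reported in this run and may differ elsewhere:

  # Sketch of the calls traced at scheduler.sh@12..@26 above.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0    # returned thread_id=11 in this run
  $RPC --plugin scheduler_plugin scheduler_thread_set_active 11 50
  $RPC --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100      # returned thread_id=12 in this run
  $RPC --plugin scheduler_plugin scheduler_thread_delete 12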
00:04:25.881 00:04:25.881 real 0m1.856s 00:04:25.881 user 0m2.560s 00:04:25.881 sys 0m0.361s 00:04:25.881 20:34:29 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.881 20:34:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:25.881 ************************************ 00:04:25.881 END TEST event_scheduler 00:04:25.881 ************************************ 00:04:25.881 20:34:29 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:25.881 20:34:29 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:25.881 20:34:29 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.881 20:34:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.881 20:34:29 event -- common/autotest_common.sh@10 -- # set +x 00:04:25.881 ************************************ 00:04:25.881 START TEST app_repeat 00:04:25.881 ************************************ 00:04:25.881 20:34:29 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:25.881 20:34:29 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.881 20:34:29 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.881 20:34:29 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:25.881 20:34:29 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:25.881 20:34:29 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:25.881 20:34:29 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:25.881 20:34:29 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:25.881 20:34:29 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1537510 00:04:25.881 20:34:29 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:25.881 20:34:29 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:25.881 20:34:29 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1537510' 00:04:25.881 Process app_repeat pid: 1537510 00:04:25.881 20:34:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:25.881 20:34:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:25.881 spdk_app_start Round 0 00:04:25.881 20:34:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1537510 /var/tmp/spdk-nbd.sock 00:04:25.881 20:34:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1537510 ']' 00:04:25.881 20:34:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:25.881 20:34:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:25.881 20:34:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:25.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:25.881 20:34:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:25.881 20:34:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:26.139 [2024-11-26 20:34:29.581432] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:04:26.139 [2024-11-26 20:34:29.581500] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1537510 ] 00:04:26.139 [2024-11-26 20:34:29.648541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:26.139 [2024-11-26 20:34:29.707978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.139 [2024-11-26 20:34:29.707981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.397 20:34:29 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:26.397 20:34:29 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:26.397 20:34:29 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:26.654 Malloc0 00:04:26.654 20:34:30 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:26.912 Malloc1 00:04:26.912 20:34:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:26.912 20:34:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.912 20:34:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:26.912 20:34:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:26.912 20:34:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.912 20:34:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:26.913 20:34:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:26.913 20:34:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.913 20:34:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:26.913 20:34:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:26.913 20:34:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.913 20:34:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:26.913 20:34:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:26.913 20:34:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:26.913 20:34:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:26.913 20:34:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:27.170 /dev/nbd0 00:04:27.170 20:34:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:27.170 20:34:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:27.170 20:34:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:27.170 20:34:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:27.170 20:34:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:27.170 20:34:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:27.170 20:34:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:04:27.170 20:34:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:27.170 20:34:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:27.170 20:34:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:27.170 20:34:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:27.170 1+0 records in 00:04:27.170 1+0 records out 00:04:27.170 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242804 s, 16.9 MB/s 00:04:27.170 20:34:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:27.170 20:34:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:27.170 20:34:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:27.170 20:34:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:27.170 20:34:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:27.170 20:34:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:27.170 20:34:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:27.170 20:34:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:27.428 /dev/nbd1 00:04:27.428 20:34:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:27.428 20:34:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:27.428 20:34:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:27.428 20:34:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:27.428 20:34:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:27.428 20:34:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:27.428 20:34:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:27.428 20:34:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:27.428 20:34:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:27.428 20:34:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:27.428 20:34:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:27.428 1+0 records in 00:04:27.428 1+0 records out 00:04:27.428 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214625 s, 19.1 MB/s 00:04:27.428 20:34:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:27.428 20:34:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:27.428 20:34:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:27.428 20:34:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:27.428 20:34:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:27.428 20:34:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:27.428 20:34:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:27.428 
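[editor note] Both nbd devices pass the same readiness probe seen above: poll /proc/partitions until the device name appears, then read a single 4 KiB block back through O_DIRECT as a smoke test. A rough sketch of that check; the retry budget and temp-file path are assumptions:

  waitfornbd_sketch() {
      local nbd_name=$1 tmp=/tmp/nbdtest i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      grep -q -w "$nbd_name" /proc/partitions || return 1
      # read one 4 KiB block straight from the device to prove it answers I/O
      dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
      local size
      size=$(stat -c %s "$tmp")
      rm -f "$tmp"
      [ "$size" != 0 ]                          # a non-empty read means the device is usable
  }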
20:34:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:27.428 20:34:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.428 20:34:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:27.685 20:34:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:27.685 { 00:04:27.685 "nbd_device": "/dev/nbd0", 00:04:27.685 "bdev_name": "Malloc0" 00:04:27.685 }, 00:04:27.685 { 00:04:27.685 "nbd_device": "/dev/nbd1", 00:04:27.685 "bdev_name": "Malloc1" 00:04:27.685 } 00:04:27.685 ]' 00:04:27.685 20:34:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:27.685 { 00:04:27.685 "nbd_device": "/dev/nbd0", 00:04:27.685 "bdev_name": "Malloc0" 00:04:27.685 }, 00:04:27.685 { 00:04:27.685 "nbd_device": "/dev/nbd1", 00:04:27.685 "bdev_name": "Malloc1" 00:04:27.685 } 00:04:27.685 ]' 00:04:27.685 20:34:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:27.942 20:34:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:27.942 /dev/nbd1' 00:04:27.942 20:34:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:27.942 /dev/nbd1' 00:04:27.942 20:34:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:27.942 20:34:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:27.942 20:34:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:27.942 20:34:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:27.942 20:34:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:27.942 20:34:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:27.942 20:34:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.942 20:34:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:27.943 20:34:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:27.943 20:34:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:27.943 20:34:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:27.943 20:34:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:27.943 256+0 records in 00:04:27.943 256+0 records out 00:04:27.943 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00509574 s, 206 MB/s 00:04:27.943 20:34:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:27.943 20:34:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:27.943 256+0 records in 00:04:27.943 256+0 records out 00:04:27.943 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0194932 s, 53.8 MB/s 00:04:27.943 20:34:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:27.943 20:34:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:27.943 256+0 records in 00:04:27.943 256+0 records out 00:04:27.943 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0224172 s, 46.8 MB/s 00:04:27.943 20:34:31 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:27.943 20:34:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.943 20:34:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:27.943 20:34:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:27.943 20:34:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:27.943 20:34:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:27.943 20:34:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:27.943 20:34:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:27.943 20:34:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:27.943 20:34:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:27.943 20:34:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:27.943 20:34:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:27.943 20:34:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:27.943 20:34:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.943 20:34:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.943 20:34:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:27.943 20:34:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:27.943 20:34:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:27.943 20:34:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:28.200 20:34:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:28.200 20:34:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:28.200 20:34:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:28.200 20:34:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:28.200 20:34:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:28.200 20:34:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:28.200 20:34:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:28.200 20:34:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:28.200 20:34:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:28.200 20:34:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:28.457 20:34:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:28.457 20:34:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:28.457 20:34:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:28.457 20:34:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:28.457 20:34:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:28.457 20:34:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:28.457 20:34:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:28.457 20:34:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:28.457 20:34:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:28.457 20:34:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:28.457 20:34:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:28.714 20:34:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:28.714 20:34:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:28.714 20:34:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:28.714 20:34:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:28.714 20:34:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:28.714 20:34:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:28.714 20:34:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:28.714 20:34:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:28.714 20:34:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:28.714 20:34:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:28.714 20:34:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:28.714 20:34:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:28.714 20:34:32 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:29.278 20:34:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:29.278 [2024-11-26 20:34:32.893936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:29.278 [2024-11-26 20:34:32.947980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.278 [2024-11-26 20:34:32.947984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.537 [2024-11-26 20:34:33.006943] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:29.537 [2024-11-26 20:34:33.007005] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:32.151 20:34:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:32.151 20:34:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:32.151 spdk_app_start Round 1 00:04:32.151 20:34:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1537510 /var/tmp/spdk-nbd.sock 00:04:32.151 20:34:35 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1537510 ']' 00:04:32.151 20:34:35 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:32.151 20:34:35 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:32.151 20:34:35 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:32.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
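[editor note] The round that just completed (and each round that follows) runs the same write/verify pass: generate 1 MiB of random reference data, write it to every nbd device with O_DIRECT, then compare each device against the reference byte-for-byte. A condensed sketch of that pass; SPDK_DIR is an assumption, the sizes and flags match the dd/cmp arguments in the trace:

  nbd_list=(/dev/nbd0 /dev/nbd1)
  tmp=$SPDK_DIR/test/event/nbdrandtest

  dd if=/dev/urandom of="$tmp" bs=4096 count=256          # 1 MiB of random reference data
  for dev in "${nbd_list[@]}"; do
      dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # write it through O_DIRECT
  done
  for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp" "$dev"                           # read back and compare byte-for-byte
  done
  rm "$tmp"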
00:04:32.151 20:34:35 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:32.151 20:34:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:32.408 20:34:35 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:32.408 20:34:35 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:32.408 20:34:35 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:32.666 Malloc0 00:04:32.666 20:34:36 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:32.923 Malloc1 00:04:32.923 20:34:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:32.923 20:34:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.923 20:34:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:32.923 20:34:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:32.923 20:34:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.923 20:34:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:32.923 20:34:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:32.923 20:34:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.923 20:34:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:32.923 20:34:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:32.923 20:34:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.923 20:34:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:32.923 20:34:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:32.923 20:34:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:32.923 20:34:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:32.923 20:34:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:33.181 /dev/nbd0 00:04:33.181 20:34:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:33.181 20:34:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:33.181 20:34:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:33.181 20:34:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:33.181 20:34:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:33.181 20:34:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:33.181 20:34:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:33.181 20:34:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:33.181 20:34:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:33.181 20:34:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:33.181 20:34:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:33.181 1+0 records in 00:04:33.181 1+0 records out 00:04:33.181 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000139751 s, 29.3 MB/s 00:04:33.181 20:34:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:33.181 20:34:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:33.181 20:34:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:33.181 20:34:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:33.181 20:34:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:33.181 20:34:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:33.181 20:34:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:33.181 20:34:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:33.746 /dev/nbd1 00:04:33.746 20:34:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:33.746 20:34:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:33.746 20:34:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:33.747 20:34:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:33.747 20:34:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:33.747 20:34:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:33.747 20:34:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:33.747 20:34:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:33.747 20:34:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:33.747 20:34:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:33.747 20:34:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:33.747 1+0 records in 00:04:33.747 1+0 records out 00:04:33.747 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198155 s, 20.7 MB/s 00:04:33.747 20:34:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:33.747 20:34:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:33.747 20:34:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:33.747 20:34:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:33.747 20:34:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:33.747 20:34:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:33.747 20:34:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:33.747 20:34:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:33.747 20:34:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:33.747 20:34:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:33.747 20:34:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:33.747 { 00:04:33.747 "nbd_device": "/dev/nbd0", 00:04:33.747 "bdev_name": "Malloc0" 00:04:33.747 }, 00:04:33.747 { 00:04:33.747 "nbd_device": "/dev/nbd1", 00:04:33.747 "bdev_name": "Malloc1" 00:04:33.747 } 00:04:33.747 ]' 00:04:34.004 20:34:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:34.004 { 00:04:34.004 "nbd_device": "/dev/nbd0", 00:04:34.004 "bdev_name": "Malloc0" 00:04:34.004 }, 00:04:34.004 { 00:04:34.004 "nbd_device": "/dev/nbd1", 00:04:34.004 "bdev_name": "Malloc1" 00:04:34.004 } 00:04:34.004 ]' 00:04:34.004 20:34:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:34.004 20:34:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:34.004 /dev/nbd1' 00:04:34.004 20:34:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:34.004 /dev/nbd1' 00:04:34.004 20:34:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:34.004 20:34:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:34.004 20:34:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:34.004 20:34:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:34.004 20:34:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:34.004 20:34:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:34.004 20:34:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.004 20:34:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:34.004 20:34:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:34.004 20:34:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:34.004 20:34:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:34.004 20:34:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:34.004 256+0 records in 00:04:34.004 256+0 records out 00:04:34.004 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00434979 s, 241 MB/s 00:04:34.004 20:34:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:34.004 20:34:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:34.004 256+0 records in 00:04:34.004 256+0 records out 00:04:34.004 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020938 s, 50.1 MB/s 00:04:34.004 20:34:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:34.004 20:34:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:34.004 256+0 records in 00:04:34.004 256+0 records out 00:04:34.004 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222071 s, 47.2 MB/s 00:04:34.004 20:34:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:34.004 20:34:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.004 20:34:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:34.004 20:34:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:34.004 20:34:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:34.004 20:34:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:34.004 20:34:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:34.004 20:34:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:34.004 20:34:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:34.004 20:34:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:34.005 20:34:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:34.005 20:34:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:34.005 20:34:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:34.005 20:34:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.005 20:34:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.005 20:34:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:34.005 20:34:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:34.005 20:34:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:34.005 20:34:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:34.262 20:34:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:34.262 20:34:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:34.262 20:34:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:34.262 20:34:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:34.262 20:34:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:34.262 20:34:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:34.262 20:34:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:34.262 20:34:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:34.262 20:34:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:34.262 20:34:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:34.518 20:34:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:34.518 20:34:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:34.518 20:34:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:34.518 20:34:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:34.518 20:34:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:34.518 20:34:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:34.518 20:34:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:34.518 20:34:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:34.518 20:34:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:34.518 20:34:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.518 20:34:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:34.775 20:34:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:34.775 20:34:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:34.775 20:34:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:34.775 20:34:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:34.775 20:34:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:34.775 20:34:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:34.775 20:34:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:34.775 20:34:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:34.775 20:34:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:34.775 20:34:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:34.775 20:34:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:34.775 20:34:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:34.775 20:34:38 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:35.337 20:34:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:35.337 [2024-11-26 20:34:38.948219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:35.337 [2024-11-26 20:34:39.002265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.337 [2024-11-26 20:34:39.002265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:35.594 [2024-11-26 20:34:39.064159] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:35.594 [2024-11-26 20:34:39.064235] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:38.116 20:34:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:38.116 20:34:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:38.116 spdk_app_start Round 2 00:04:38.116 20:34:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1537510 /var/tmp/spdk-nbd.sock 00:04:38.116 20:34:41 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1537510 ']' 00:04:38.116 20:34:41 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:38.116 20:34:41 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:38.116 20:34:41 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:38.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
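[editor note] The nbd_get_count checks traced above and below parse the nbd_get_disks JSON with jq and count how many /dev/nbd entries it lists, both after starting the disks (expecting 2) and after stopping them (expecting 0). A small sketch of that count check, reusing the $RPC shorthand from the earlier sketch (an assumption):

  expected=2
  json=$($RPC nbd_get_disks)
  names=$(echo "$json" | jq -r '.[] | .nbd_device')
  count=$(echo "$names" | grep -c /dev/nbd || true)        # grep -c exits 1 when nothing matches
  if [ "$count" -ne "$expected" ]; then
      echo "expected $expected nbd devices, found $count" >&2
      exit 1
  fi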
00:04:38.116 20:34:41 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:38.116 20:34:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:38.374 20:34:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.374 20:34:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:38.375 20:34:42 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:38.632 Malloc0 00:04:38.633 20:34:42 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:38.891 Malloc1 00:04:38.891 20:34:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:38.891 20:34:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.891 20:34:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:38.891 20:34:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:38.891 20:34:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.891 20:34:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:38.891 20:34:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:38.891 20:34:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.891 20:34:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:38.891 20:34:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:38.891 20:34:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.891 20:34:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:38.891 20:34:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:38.891 20:34:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:38.891 20:34:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:38.891 20:34:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:39.456 /dev/nbd0 00:04:39.456 20:34:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:39.456 20:34:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:39.456 20:34:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:39.456 20:34:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:39.456 20:34:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:39.456 20:34:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:39.456 20:34:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:39.456 20:34:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:39.456 20:34:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:39.456 20:34:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:39.456 20:34:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:39.456 1+0 records in 00:04:39.456 1+0 records out 00:04:39.456 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314267 s, 13.0 MB/s 00:04:39.456 20:34:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:39.456 20:34:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:39.456 20:34:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:39.456 20:34:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:39.456 20:34:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:39.456 20:34:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:39.456 20:34:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.456 20:34:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:39.714 /dev/nbd1 00:04:39.714 20:34:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:39.714 20:34:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:39.714 20:34:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:39.714 20:34:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:39.714 20:34:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:39.714 20:34:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:39.714 20:34:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:39.714 20:34:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:39.714 20:34:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:39.714 20:34:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:39.714 20:34:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:39.714 1+0 records in 00:04:39.714 1+0 records out 00:04:39.714 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000176151 s, 23.3 MB/s 00:04:39.714 20:34:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:39.714 20:34:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:39.714 20:34:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:39.714 20:34:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:39.714 20:34:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:39.714 20:34:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:39.714 20:34:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:39.714 20:34:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:39.714 20:34:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.714 20:34:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:39.973 20:34:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:39.973 { 00:04:39.973 "nbd_device": "/dev/nbd0", 00:04:39.973 "bdev_name": "Malloc0" 00:04:39.973 }, 00:04:39.973 { 00:04:39.973 "nbd_device": "/dev/nbd1", 00:04:39.973 "bdev_name": "Malloc1" 00:04:39.973 } 00:04:39.973 ]' 00:04:39.973 20:34:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:39.973 { 00:04:39.973 "nbd_device": "/dev/nbd0", 00:04:39.973 "bdev_name": "Malloc0" 00:04:39.973 }, 00:04:39.973 { 00:04:39.973 "nbd_device": "/dev/nbd1", 00:04:39.973 "bdev_name": "Malloc1" 00:04:39.973 } 00:04:39.973 ]' 00:04:39.973 20:34:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:39.973 20:34:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:39.973 /dev/nbd1' 00:04:39.973 20:34:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:39.973 /dev/nbd1' 00:04:39.973 20:34:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:39.973 20:34:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:39.973 20:34:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:39.973 20:34:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:39.973 20:34:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:39.973 20:34:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:39.973 20:34:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.973 20:34:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:39.973 20:34:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:39.973 20:34:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:39.973 20:34:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:39.973 20:34:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:39.973 256+0 records in 00:04:39.973 256+0 records out 00:04:39.973 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00511711 s, 205 MB/s 00:04:39.973 20:34:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:39.973 20:34:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:39.973 256+0 records in 00:04:39.973 256+0 records out 00:04:39.973 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197296 s, 53.1 MB/s 00:04:39.973 20:34:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:39.973 20:34:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:39.973 256+0 records in 00:04:39.973 256+0 records out 00:04:39.973 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216773 s, 48.4 MB/s 00:04:39.973 20:34:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:39.973 20:34:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.973 20:34:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:39.973 20:34:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:39.973 20:34:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:39.973 20:34:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:39.973 20:34:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:39.973 20:34:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:39.973 20:34:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:39.973 20:34:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:39.973 20:34:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:39.974 20:34:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:39.974 20:34:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:39.974 20:34:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:39.974 20:34:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:39.974 20:34:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:39.974 20:34:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:39.974 20:34:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:39.974 20:34:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:40.231 20:34:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:40.231 20:34:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:40.231 20:34:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:40.231 20:34:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:40.231 20:34:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:40.231 20:34:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:40.231 20:34:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:40.231 20:34:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:40.231 20:34:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:40.231 20:34:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:40.797 20:34:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:40.797 20:34:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:40.797 20:34:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:40.797 20:34:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:40.797 20:34:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:40.797 20:34:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:40.797 20:34:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:40.797 20:34:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:40.797 20:34:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:40.797 20:34:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.797 20:34:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:40.797 20:34:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:40.797 20:34:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:40.797 20:34:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:41.054 20:34:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:41.054 20:34:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:41.054 20:34:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:41.054 20:34:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:41.054 20:34:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:41.054 20:34:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:41.054 20:34:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:41.054 20:34:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:41.054 20:34:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:41.054 20:34:44 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:41.312 20:34:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:41.570 [2024-11-26 20:34:45.021216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:41.570 [2024-11-26 20:34:45.076548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.570 [2024-11-26 20:34:45.076552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.570 [2024-11-26 20:34:45.132248] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:41.570 [2024-11-26 20:34:45.132377] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:44.139 20:34:47 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1537510 /var/tmp/spdk-nbd.sock 00:04:44.139 20:34:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1537510 ']' 00:04:44.139 20:34:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:44.139 20:34:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.139 20:34:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:44.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
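[editor note] Rounds 0 through 2 repeat the same create/verify/kill cycle, and the app_repeat process (same pid throughout) brings its SPDK instance back up after each SIGTERM; the driver then waits for the Round 3 instance and kills the process for good. A loose, self-contained sketch of that control flow; the socket poll stands in for the waitforlisten helper and SPDK_DIR is an assumption:

  "$SPDK_DIR/test/event/app_repeat/app_repeat" -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
  repeat_pid=$!
  for i in {0..2}; do
      echo "spdk_app_start Round $i"
      until [ -S /var/tmp/spdk-nbd.sock ]; do sleep 0.1; done      # stand-in for waitforlisten
      # ... create malloc bdevs, start nbd disks, write/verify, stop them (earlier sketches) ...
      "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
      sleep 3                                  # matches the pause the trace shows before the next round
  done
  kill "$repeat_pid"; wait "$repeat_pid"       # final teardown once the Round 3 instance is up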
00:04:44.139 20:34:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.139 20:34:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:44.397 20:34:48 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.397 20:34:48 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:44.397 20:34:48 event.app_repeat -- event/event.sh@39 -- # killprocess 1537510 00:04:44.397 20:34:48 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1537510 ']' 00:04:44.397 20:34:48 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1537510 00:04:44.397 20:34:48 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:44.397 20:34:48 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:44.397 20:34:48 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1537510 00:04:44.656 20:34:48 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:44.656 20:34:48 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:44.656 20:34:48 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1537510' 00:04:44.656 killing process with pid 1537510 00:04:44.656 20:34:48 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1537510 00:04:44.656 20:34:48 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1537510 00:04:44.656 spdk_app_start is called in Round 0. 00:04:44.656 Shutdown signal received, stop current app iteration 00:04:44.656 Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 reinitialization... 00:04:44.656 spdk_app_start is called in Round 1. 00:04:44.656 Shutdown signal received, stop current app iteration 00:04:44.656 Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 reinitialization... 00:04:44.656 spdk_app_start is called in Round 2. 00:04:44.656 Shutdown signal received, stop current app iteration 00:04:44.656 Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 reinitialization... 00:04:44.656 spdk_app_start is called in Round 3. 
00:04:44.656 Shutdown signal received, stop current app iteration 00:04:44.656 20:34:48 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:44.656 20:34:48 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:44.656 00:04:44.656 real 0m18.768s 00:04:44.656 user 0m41.583s 00:04:44.656 sys 0m3.206s 00:04:44.656 20:34:48 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.656 20:34:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:44.656 ************************************ 00:04:44.656 END TEST app_repeat 00:04:44.656 ************************************ 00:04:44.656 20:34:48 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:44.656 20:34:48 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:44.656 20:34:48 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.656 20:34:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.656 20:34:48 event -- common/autotest_common.sh@10 -- # set +x 00:04:44.916 ************************************ 00:04:44.916 START TEST cpu_locks 00:04:44.916 ************************************ 00:04:44.916 20:34:48 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:44.916 * Looking for test storage... 00:04:44.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:44.916 20:34:48 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:44.916 20:34:48 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:04:44.916 20:34:48 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:44.916 20:34:48 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:44.916 20:34:48 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.916 20:34:48 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.916 20:34:48 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.916 20:34:48 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.916 20:34:48 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.916 20:34:48 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.916 20:34:48 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.916 20:34:48 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.916 20:34:48 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.916 20:34:48 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.916 20:34:48 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.916 20:34:48 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:44.916 20:34:48 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:44.916 20:34:48 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.916 20:34:48 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:44.916 20:34:48 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:44.916 20:34:48 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:44.916 20:34:48 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.916 20:34:48 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:44.916 20:34:48 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.916 20:34:48 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:44.916 20:34:48 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:44.916 20:34:48 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.916 20:34:48 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:44.916 20:34:48 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.916 20:34:48 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.916 20:34:48 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.916 20:34:48 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:44.916 20:34:48 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.916 20:34:48 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:44.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.916 --rc genhtml_branch_coverage=1 00:04:44.916 --rc genhtml_function_coverage=1 00:04:44.916 --rc genhtml_legend=1 00:04:44.916 --rc geninfo_all_blocks=1 00:04:44.916 --rc geninfo_unexecuted_blocks=1 00:04:44.916 00:04:44.916 ' 00:04:44.916 20:34:48 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:44.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.916 --rc genhtml_branch_coverage=1 00:04:44.916 --rc genhtml_function_coverage=1 00:04:44.916 --rc genhtml_legend=1 00:04:44.916 --rc geninfo_all_blocks=1 00:04:44.916 --rc geninfo_unexecuted_blocks=1 00:04:44.916 00:04:44.916 ' 00:04:44.916 20:34:48 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:44.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.916 --rc genhtml_branch_coverage=1 00:04:44.916 --rc genhtml_function_coverage=1 00:04:44.916 --rc genhtml_legend=1 00:04:44.916 --rc geninfo_all_blocks=1 00:04:44.916 --rc geninfo_unexecuted_blocks=1 00:04:44.916 00:04:44.916 ' 00:04:44.916 20:34:48 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:44.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.916 --rc genhtml_branch_coverage=1 00:04:44.916 --rc genhtml_function_coverage=1 00:04:44.916 --rc genhtml_legend=1 00:04:44.916 --rc geninfo_all_blocks=1 00:04:44.916 --rc geninfo_unexecuted_blocks=1 00:04:44.916 00:04:44.916 ' 00:04:44.916 20:34:48 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:44.916 20:34:48 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:44.916 20:34:48 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:44.916 20:34:48 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:44.917 20:34:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.917 20:34:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.917 20:34:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:44.917 ************************************ 
00:04:44.917 START TEST default_locks 00:04:44.917 ************************************ 00:04:44.917 20:34:48 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:44.917 20:34:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1540016 00:04:44.917 20:34:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:44.917 20:34:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1540016 00:04:44.917 20:34:48 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1540016 ']' 00:04:44.917 20:34:48 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.917 20:34:48 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.917 20:34:48 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.917 20:34:48 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.917 20:34:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:44.917 [2024-11-26 20:34:48.599776] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:04:44.917 [2024-11-26 20:34:48.599859] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1540016 ] 00:04:45.175 [2024-11-26 20:34:48.664630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.175 [2024-11-26 20:34:48.723990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.433 20:34:48 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.433 20:34:48 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:45.433 20:34:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1540016 00:04:45.433 20:34:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1540016 00:04:45.433 20:34:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:45.691 lslocks: write error 00:04:45.691 20:34:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1540016 00:04:45.691 20:34:49 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1540016 ']' 00:04:45.691 20:34:49 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1540016 00:04:45.691 20:34:49 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:45.691 20:34:49 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:45.691 20:34:49 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1540016 00:04:45.691 20:34:49 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:45.691 20:34:49 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:45.691 20:34:49 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 1540016' 00:04:45.691 killing process with pid 1540016 00:04:45.691 20:34:49 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1540016 00:04:45.691 20:34:49 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1540016 00:04:45.949 20:34:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1540016 00:04:45.949 20:34:49 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:45.949 20:34:49 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1540016 00:04:46.207 20:34:49 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:46.207 20:34:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:46.207 20:34:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:46.207 20:34:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:46.207 20:34:49 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1540016 00:04:46.207 20:34:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1540016 ']' 00:04:46.207 20:34:49 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.207 20:34:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.207 20:34:49 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:46.207 20:34:49 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.207 20:34:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:46.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1540016) - No such process 00:04:46.207 ERROR: process (pid: 1540016) is no longer running 00:04:46.207 20:34:49 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.207 20:34:49 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:46.207 20:34:49 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:46.207 20:34:49 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:46.207 20:34:49 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:46.207 20:34:49 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:46.207 20:34:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:46.207 20:34:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:46.207 20:34:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:46.207 20:34:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:46.207 00:04:46.207 real 0m1.106s 00:04:46.207 user 0m1.072s 00:04:46.207 sys 0m0.491s 00:04:46.207 20:34:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.207 20:34:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:46.207 ************************************ 00:04:46.207 END TEST default_locks 00:04:46.207 ************************************ 00:04:46.207 20:34:49 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:46.207 20:34:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.207 20:34:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.207 20:34:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:46.207 ************************************ 00:04:46.207 START TEST default_locks_via_rpc 00:04:46.207 ************************************ 00:04:46.207 20:34:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:46.207 20:34:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1540181 00:04:46.207 20:34:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:46.207 20:34:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1540181 00:04:46.207 20:34:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1540181 ']' 00:04:46.207 20:34:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.207 20:34:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.207 20:34:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:46.207 20:34:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.207 20:34:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.207 [2024-11-26 20:34:49.758703] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:04:46.207 [2024-11-26 20:34:49.758799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1540181 ] 00:04:46.207 [2024-11-26 20:34:49.821912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.207 [2024-11-26 20:34:49.875580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.465 20:34:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.465 20:34:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:46.465 20:34:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:46.465 20:34:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.465 20:34:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.465 20:34:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.465 20:34:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:46.465 20:34:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:46.465 20:34:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:46.465 20:34:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:46.465 20:34:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:46.465 20:34:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.465 20:34:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.465 20:34:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.465 20:34:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1540181 00:04:46.465 20:34:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1540181 00:04:46.465 20:34:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:46.722 20:34:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1540181 00:04:46.722 20:34:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1540181 ']' 00:04:46.722 20:34:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1540181 00:04:46.722 20:34:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:46.722 20:34:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.722 20:34:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1540181 00:04:46.722 20:34:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.722 
20:34:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.722 20:34:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1540181' 00:04:46.722 killing process with pid 1540181 00:04:46.722 20:34:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1540181 00:04:46.722 20:34:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1540181 00:04:47.288 00:04:47.288 real 0m1.124s 00:04:47.288 user 0m1.081s 00:04:47.288 sys 0m0.495s 00:04:47.288 20:34:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.288 20:34:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.288 ************************************ 00:04:47.288 END TEST default_locks_via_rpc 00:04:47.288 ************************************ 00:04:47.288 20:34:50 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:47.288 20:34:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.288 20:34:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.288 20:34:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:47.288 ************************************ 00:04:47.288 START TEST non_locking_app_on_locked_coremask 00:04:47.288 ************************************ 00:04:47.288 20:34:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:47.288 20:34:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1540346 00:04:47.288 20:34:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:47.288 20:34:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1540346 /var/tmp/spdk.sock 00:04:47.288 20:34:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1540346 ']' 00:04:47.288 20:34:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.288 20:34:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.288 20:34:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.288 20:34:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.288 20:34:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:47.288 [2024-11-26 20:34:50.936543] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:04:47.288 [2024-11-26 20:34:50.936645] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1540346 ] 00:04:47.545 [2024-11-26 20:34:51.004034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.545 [2024-11-26 20:34:51.064142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.803 20:34:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.803 20:34:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:47.803 20:34:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1540353 00:04:47.803 20:34:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:47.803 20:34:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1540353 /var/tmp/spdk2.sock 00:04:47.803 20:34:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1540353 ']' 00:04:47.803 20:34:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:47.803 20:34:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.803 20:34:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:47.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:47.803 20:34:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.803 20:34:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:47.803 [2024-11-26 20:34:51.393311] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:04:47.803 [2024-11-26 20:34:51.393399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1540353 ] 00:04:47.803 [2024-11-26 20:34:51.495621] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:47.803 [2024-11-26 20:34:51.495678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.059 [2024-11-26 20:34:51.616839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.991 20:34:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.991 20:34:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:48.991 20:34:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1540346 00:04:48.991 20:34:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1540346 00:04:48.991 20:34:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:49.249 lslocks: write error 00:04:49.249 20:34:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1540346 00:04:49.249 20:34:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1540346 ']' 00:04:49.249 20:34:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1540346 00:04:49.249 20:34:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:49.249 20:34:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:49.249 20:34:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1540346 00:04:49.249 20:34:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:49.249 20:34:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:49.249 20:34:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1540346' 00:04:49.249 killing process with pid 1540346 00:04:49.249 20:34:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1540346 00:04:49.249 20:34:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1540346 00:04:50.183 20:34:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1540353 00:04:50.183 20:34:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1540353 ']' 00:04:50.183 20:34:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1540353 00:04:50.183 20:34:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:50.183 20:34:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:50.183 20:34:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1540353 00:04:50.183 20:34:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:50.183 20:34:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:50.183 20:34:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1540353' 00:04:50.183 
killing process with pid 1540353 00:04:50.183 20:34:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1540353 00:04:50.183 20:34:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1540353 00:04:50.441 00:04:50.441 real 0m3.174s 00:04:50.441 user 0m3.379s 00:04:50.441 sys 0m1.051s 00:04:50.441 20:34:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.441 20:34:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:50.441 ************************************ 00:04:50.441 END TEST non_locking_app_on_locked_coremask 00:04:50.441 ************************************ 00:04:50.441 20:34:54 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:50.441 20:34:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.441 20:34:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.441 20:34:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:50.441 ************************************ 00:04:50.441 START TEST locking_app_on_unlocked_coremask 00:04:50.441 ************************************ 00:04:50.441 20:34:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:04:50.441 20:34:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1540780 00:04:50.441 20:34:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:50.441 20:34:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1540780 /var/tmp/spdk.sock 00:04:50.442 20:34:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1540780 ']' 00:04:50.442 20:34:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.442 20:34:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.442 20:34:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.442 20:34:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.442 20:34:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:50.699 [2024-11-26 20:34:54.157333] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:04:50.699 [2024-11-26 20:34:54.157426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1540780 ] 00:04:50.699 [2024-11-26 20:34:54.222035] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:50.699 [2024-11-26 20:34:54.222072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.699 [2024-11-26 20:34:54.282447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.957 20:34:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.957 20:34:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:50.957 20:34:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1540783 00:04:50.957 20:34:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:50.957 20:34:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1540783 /var/tmp/spdk2.sock 00:04:50.957 20:34:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1540783 ']' 00:04:50.957 20:34:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:50.957 20:34:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.957 20:34:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:50.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:50.957 20:34:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.957 20:34:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:50.957 [2024-11-26 20:34:54.608366] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:04:50.957 [2024-11-26 20:34:54.608453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1540783 ] 00:04:51.215 [2024-11-26 20:34:54.704832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.215 [2024-11-26 20:34:54.817492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.149 20:34:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.149 20:34:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:52.149 20:34:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1540783 00:04:52.149 20:34:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1540783 00:04:52.149 20:34:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:52.407 lslocks: write error 00:04:52.407 20:34:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1540780 00:04:52.407 20:34:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1540780 ']' 00:04:52.407 20:34:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1540780 00:04:52.407 20:34:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:52.407 20:34:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.407 20:34:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1540780 00:04:52.407 20:34:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.407 20:34:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.407 20:34:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1540780' 00:04:52.407 killing process with pid 1540780 00:04:52.407 20:34:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1540780 00:04:52.407 20:34:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1540780 00:04:53.340 20:34:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1540783 00:04:53.340 20:34:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1540783 ']' 00:04:53.340 20:34:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1540783 00:04:53.340 20:34:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:53.340 20:34:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:53.340 20:34:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1540783 00:04:53.340 20:34:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:53.340 20:34:56 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:53.340 20:34:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1540783' 00:04:53.340 killing process with pid 1540783 00:04:53.340 20:34:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1540783 00:04:53.340 20:34:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1540783 00:04:53.597 00:04:53.597 real 0m3.145s 00:04:53.597 user 0m3.370s 00:04:53.597 sys 0m0.996s 00:04:53.597 20:34:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.597 20:34:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:53.597 ************************************ 00:04:53.597 END TEST locking_app_on_unlocked_coremask 00:04:53.597 ************************************ 00:04:53.597 20:34:57 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:53.597 20:34:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.597 20:34:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.597 20:34:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:53.855 ************************************ 00:04:53.856 START TEST locking_app_on_locked_coremask 00:04:53.856 ************************************ 00:04:53.856 20:34:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:04:53.856 20:34:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1541094 00:04:53.856 20:34:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:53.856 20:34:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1541094 /var/tmp/spdk.sock 00:04:53.856 20:34:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1541094 ']' 00:04:53.856 20:34:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.856 20:34:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.856 20:34:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.856 20:34:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.856 20:34:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:53.856 [2024-11-26 20:34:57.355899] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:04:53.856 [2024-11-26 20:34:57.355978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1541094 ] 00:04:53.856 [2024-11-26 20:34:57.424507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.856 [2024-11-26 20:34:57.485414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.113 20:34:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.113 20:34:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:54.113 20:34:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1541217 00:04:54.113 20:34:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:54.113 20:34:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1541217 /var/tmp/spdk2.sock 00:04:54.113 20:34:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:54.113 20:34:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1541217 /var/tmp/spdk2.sock 00:04:54.113 20:34:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:54.113 20:34:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:54.113 20:34:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:54.113 20:34:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:54.113 20:34:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1541217 /var/tmp/spdk2.sock 00:04:54.113 20:34:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1541217 ']' 00:04:54.113 20:34:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:54.113 20:34:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.113 20:34:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:54.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:54.113 20:34:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.113 20:34:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:54.370 [2024-11-26 20:34:57.821225] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:04:54.370 [2024-11-26 20:34:57.821345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1541217 ] 00:04:54.370 [2024-11-26 20:34:57.922116] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1541094 has claimed it. 00:04:54.370 [2024-11-26 20:34:57.922176] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:54.934 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1541217) - No such process 00:04:54.934 ERROR: process (pid: 1541217) is no longer running 00:04:54.934 20:34:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.934 20:34:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:54.934 20:34:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:54.934 20:34:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:54.934 20:34:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:54.934 20:34:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:54.934 20:34:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1541094 00:04:54.934 20:34:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1541094 00:04:54.934 20:34:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:55.499 lslocks: write error 00:04:55.499 20:34:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1541094 00:04:55.499 20:34:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1541094 ']' 00:04:55.499 20:34:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1541094 00:04:55.499 20:34:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:55.499 20:34:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:55.499 20:34:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1541094 00:04:55.499 20:34:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:55.499 20:34:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:55.499 20:34:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1541094' 00:04:55.499 killing process with pid 1541094 00:04:55.499 20:34:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1541094 00:04:55.499 20:34:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1541094 00:04:55.756 00:04:55.756 real 0m2.083s 00:04:55.756 user 0m2.291s 00:04:55.756 sys 0m0.652s 00:04:55.756 20:34:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:04:55.756 20:34:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:55.756 ************************************ 00:04:55.756 END TEST locking_app_on_locked_coremask 00:04:55.756 ************************************ 00:04:55.756 20:34:59 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:55.757 20:34:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.757 20:34:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.757 20:34:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:55.757 ************************************ 00:04:55.757 START TEST locking_overlapped_coremask 00:04:55.757 ************************************ 00:04:55.757 20:34:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:04:55.757 20:34:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1541391 00:04:55.757 20:34:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:55.757 20:34:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1541391 /var/tmp/spdk.sock 00:04:55.757 20:34:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1541391 ']' 00:04:55.757 20:34:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.757 20:34:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.757 20:34:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.757 20:34:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.757 20:34:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.015 [2024-11-26 20:34:59.492354] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:04:56.015 [2024-11-26 20:34:59.492451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1541391 ] 00:04:56.015 [2024-11-26 20:34:59.558806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:56.015 [2024-11-26 20:34:59.621329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.015 [2024-11-26 20:34:59.621413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:56.015 [2024-11-26 20:34:59.621417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.273 20:34:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.273 20:34:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:56.273 20:34:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1541523 00:04:56.273 20:34:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1541523 /var/tmp/spdk2.sock 00:04:56.273 20:34:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:56.273 20:34:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1541523 /var/tmp/spdk2.sock 00:04:56.273 20:34:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:56.273 20:34:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:56.273 20:34:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.273 20:34:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:56.273 20:34:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.273 20:34:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1541523 /var/tmp/spdk2.sock 00:04:56.273 20:34:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1541523 ']' 00:04:56.273 20:34:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:56.273 20:34:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.273 20:34:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:56.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:56.273 20:34:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.273 20:34:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.273 [2024-11-26 20:34:59.963655] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:04:56.273 [2024-11-26 20:34:59.963747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1541523 ] 00:04:56.531 [2024-11-26 20:35:00.077809] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1541391 has claimed it. 00:04:56.531 [2024-11-26 20:35:00.077886] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:57.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1541523) - No such process 00:04:57.142 ERROR: process (pid: 1541523) is no longer running 00:04:57.142 20:35:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.142 20:35:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:57.142 20:35:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:57.142 20:35:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:57.142 20:35:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:57.142 20:35:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:57.142 20:35:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:57.142 20:35:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:57.142 20:35:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:57.142 20:35:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:57.142 20:35:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1541391 00:04:57.142 20:35:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1541391 ']' 00:04:57.142 20:35:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1541391 00:04:57.142 20:35:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:04:57.142 20:35:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.142 20:35:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1541391 00:04:57.142 20:35:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:57.142 20:35:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:57.142 20:35:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1541391' 00:04:57.142 killing process with pid 1541391 00:04:57.142 20:35:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1541391 00:04:57.142 20:35:00 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1541391 00:04:57.460 00:04:57.460 real 0m1.713s 00:04:57.460 user 0m4.773s 00:04:57.460 sys 0m0.473s 00:04:57.460 20:35:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.460 20:35:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:57.460 ************************************ 00:04:57.460 END TEST locking_overlapped_coremask 00:04:57.460 ************************************ 00:04:57.718 20:35:01 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:57.718 20:35:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.718 20:35:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.718 20:35:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:57.718 ************************************ 00:04:57.718 START TEST locking_overlapped_coremask_via_rpc 00:04:57.718 ************************************ 00:04:57.718 20:35:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:04:57.718 20:35:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1541690 00:04:57.718 20:35:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1541690 /var/tmp/spdk.sock 00:04:57.718 20:35:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:57.718 20:35:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1541690 ']' 00:04:57.718 20:35:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.718 20:35:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.718 20:35:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.718 20:35:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.718 20:35:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.718 [2024-11-26 20:35:01.258137] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:04:57.718 [2024-11-26 20:35:01.258233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1541690 ] 00:04:57.718 [2024-11-26 20:35:01.322124] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:57.718 [2024-11-26 20:35:01.322156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:57.718 [2024-11-26 20:35:01.378915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.718 [2024-11-26 20:35:01.378978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:57.718 [2024-11-26 20:35:01.378982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.976 20:35:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.976 20:35:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:57.976 20:35:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1541697 00:04:57.976 20:35:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:57.976 20:35:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1541697 /var/tmp/spdk2.sock 00:04:57.976 20:35:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1541697 ']' 00:04:57.976 20:35:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:57.976 20:35:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.976 20:35:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:57.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:57.976 20:35:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.976 20:35:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.235 [2024-11-26 20:35:01.713027] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:04:58.235 [2024-11-26 20:35:01.713118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1541697 ] 00:04:58.235 [2024-11-26 20:35:01.819810] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:58.235 [2024-11-26 20:35:01.819849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:58.493 [2024-11-26 20:35:01.945622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:58.493 [2024-11-26 20:35:01.945682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:04:58.493 [2024-11-26 20:35:01.945685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:59.060 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.060 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:59.060 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:59.060 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.060 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.060 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.060 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:59.060 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:59.060 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:59.060 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:59.060 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:59.060 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:59.060 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:59.060 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:59.060 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.060 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.060 [2024-11-26 20:35:02.522410] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1541690 has claimed it. 
00:04:59.060 request: 00:04:59.060 { 00:04:59.060 "method": "framework_enable_cpumask_locks", 00:04:59.060 "req_id": 1 00:04:59.060 } 00:04:59.060 Got JSON-RPC error response 00:04:59.060 response: 00:04:59.060 { 00:04:59.060 "code": -32603, 00:04:59.060 "message": "Failed to claim CPU core: 2" 00:04:59.060 } 00:04:59.060 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:59.060 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:59.060 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:59.060 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:59.060 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:59.060 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1541690 /var/tmp/spdk.sock 00:04:59.060 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1541690 ']' 00:04:59.060 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.060 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.060 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.060 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.060 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.318 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.318 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:59.318 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1541697 /var/tmp/spdk2.sock 00:04:59.318 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1541697 ']' 00:04:59.318 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:59.318 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.318 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:59.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
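For context on the error captured above: locking_overlapped_coremask_via_rpc starts two spdk_tgt instances whose core masks overlap on core 2 (0x7 covers cores 0-2, 0x1c covers cores 2-4), both with --disable-cpumask-locks, and only then turns the locks on over JSON-RPC, so the second framework_enable_cpumask_locks call has to fail with -32603. A minimal sketch of that sequence from the top of an SPDK build tree, assuming rpc_cmd in the trace maps to scripts/rpc.py and leaving out the waitforlisten/cleanup plumbing the real test uses:

    # Two targets with core masks that overlap on core 2, locks disabled at startup.
    ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    # First target claims /var/tmp/spdk_cpu_lock_000..002 for cores 0-2.
    ./scripts/rpc.py framework_enable_cpumask_locks
    # Second target cannot claim core 2 -> JSON-RPC error -32603, as in the response above.
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks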
00:04:59.318 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.318 20:35:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.576 20:35:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.576 20:35:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:59.576 20:35:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:59.576 20:35:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:59.576 20:35:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:59.576 20:35:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:59.576 00:04:59.576 real 0m1.885s 00:04:59.576 user 0m0.965s 00:04:59.576 sys 0m0.147s 00:04:59.576 20:35:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.576 20:35:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.576 ************************************ 00:04:59.576 END TEST locking_overlapped_coremask_via_rpc 00:04:59.577 ************************************ 00:04:59.577 20:35:03 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:59.577 20:35:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1541690 ]] 00:04:59.577 20:35:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1541690 00:04:59.577 20:35:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1541690 ']' 00:04:59.577 20:35:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1541690 00:04:59.577 20:35:03 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:59.577 20:35:03 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.577 20:35:03 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1541690 00:04:59.577 20:35:03 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:59.577 20:35:03 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:59.577 20:35:03 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1541690' 00:04:59.577 killing process with pid 1541690 00:04:59.577 20:35:03 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1541690 00:04:59.577 20:35:03 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1541690 00:05:00.148 20:35:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1541697 ]] 00:05:00.148 20:35:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1541697 00:05:00.148 20:35:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1541697 ']' 00:05:00.148 20:35:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1541697 00:05:00.148 20:35:03 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:00.148 20:35:03 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:05:00.148 20:35:03 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1541697 00:05:00.148 20:35:03 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:00.148 20:35:03 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:00.148 20:35:03 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1541697' 00:05:00.148 killing process with pid 1541697 00:05:00.148 20:35:03 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1541697 00:05:00.148 20:35:03 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1541697 00:05:00.406 20:35:04 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:00.406 20:35:04 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:00.406 20:35:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1541690 ]] 00:05:00.406 20:35:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1541690 00:05:00.406 20:35:04 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1541690 ']' 00:05:00.406 20:35:04 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1541690 00:05:00.406 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1541690) - No such process 00:05:00.406 20:35:04 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1541690 is not found' 00:05:00.406 Process with pid 1541690 is not found 00:05:00.406 20:35:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1541697 ]] 00:05:00.406 20:35:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1541697 00:05:00.406 20:35:04 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1541697 ']' 00:05:00.406 20:35:04 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1541697 00:05:00.406 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1541697) - No such process 00:05:00.406 20:35:04 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1541697 is not found' 00:05:00.406 Process with pid 1541697 is not found 00:05:00.406 20:35:04 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:00.406 00:05:00.406 real 0m15.699s 00:05:00.406 user 0m27.913s 00:05:00.406 sys 0m5.257s 00:05:00.406 20:35:04 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.406 20:35:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.406 ************************************ 00:05:00.406 END TEST cpu_locks 00:05:00.406 ************************************ 00:05:00.406 00:05:00.406 real 0m40.376s 00:05:00.406 user 1m18.672s 00:05:00.406 sys 0m9.278s 00:05:00.406 20:35:04 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.406 20:35:04 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.406 ************************************ 00:05:00.406 END TEST event 00:05:00.406 ************************************ 00:05:00.664 20:35:04 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:00.664 20:35:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.664 20:35:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.664 20:35:04 -- common/autotest_common.sh@10 -- # set +x 00:05:00.664 ************************************ 00:05:00.664 START TEST thread 00:05:00.664 ************************************ 00:05:00.664 20:35:04 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:00.664 * Looking for test storage... 00:05:00.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:00.664 20:35:04 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:00.664 20:35:04 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:00.664 20:35:04 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:00.664 20:35:04 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:00.664 20:35:04 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.664 20:35:04 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.664 20:35:04 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.664 20:35:04 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.664 20:35:04 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.664 20:35:04 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.664 20:35:04 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.664 20:35:04 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.664 20:35:04 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.664 20:35:04 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.664 20:35:04 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.665 20:35:04 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:00.665 20:35:04 thread -- scripts/common.sh@345 -- # : 1 00:05:00.665 20:35:04 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.665 20:35:04 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:00.665 20:35:04 thread -- scripts/common.sh@365 -- # decimal 1 00:05:00.665 20:35:04 thread -- scripts/common.sh@353 -- # local d=1 00:05:00.665 20:35:04 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.665 20:35:04 thread -- scripts/common.sh@355 -- # echo 1 00:05:00.665 20:35:04 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.665 20:35:04 thread -- scripts/common.sh@366 -- # decimal 2 00:05:00.665 20:35:04 thread -- scripts/common.sh@353 -- # local d=2 00:05:00.665 20:35:04 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.665 20:35:04 thread -- scripts/common.sh@355 -- # echo 2 00:05:00.665 20:35:04 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.665 20:35:04 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.665 20:35:04 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.665 20:35:04 thread -- scripts/common.sh@368 -- # return 0 00:05:00.665 20:35:04 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.665 20:35:04 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:00.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.665 --rc genhtml_branch_coverage=1 00:05:00.665 --rc genhtml_function_coverage=1 00:05:00.665 --rc genhtml_legend=1 00:05:00.665 --rc geninfo_all_blocks=1 00:05:00.665 --rc geninfo_unexecuted_blocks=1 00:05:00.665 00:05:00.665 ' 00:05:00.665 20:35:04 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:00.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.665 --rc genhtml_branch_coverage=1 00:05:00.665 --rc genhtml_function_coverage=1 00:05:00.665 --rc genhtml_legend=1 00:05:00.665 --rc geninfo_all_blocks=1 00:05:00.665 --rc geninfo_unexecuted_blocks=1 00:05:00.665 
00:05:00.665 ' 00:05:00.665 20:35:04 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:00.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.665 --rc genhtml_branch_coverage=1 00:05:00.665 --rc genhtml_function_coverage=1 00:05:00.665 --rc genhtml_legend=1 00:05:00.665 --rc geninfo_all_blocks=1 00:05:00.665 --rc geninfo_unexecuted_blocks=1 00:05:00.665 00:05:00.665 ' 00:05:00.665 20:35:04 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:00.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.665 --rc genhtml_branch_coverage=1 00:05:00.665 --rc genhtml_function_coverage=1 00:05:00.665 --rc genhtml_legend=1 00:05:00.665 --rc geninfo_all_blocks=1 00:05:00.665 --rc geninfo_unexecuted_blocks=1 00:05:00.665 00:05:00.665 ' 00:05:00.665 20:35:04 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:00.665 20:35:04 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:00.665 20:35:04 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.665 20:35:04 thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.665 ************************************ 00:05:00.665 START TEST thread_poller_perf 00:05:00.665 ************************************ 00:05:00.665 20:35:04 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:00.665 [2024-11-26 20:35:04.329746] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:05:00.665 [2024-11-26 20:35:04.329812] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1542195 ] 00:05:00.923 [2024-11-26 20:35:04.397803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.923 [2024-11-26 20:35:04.456947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.923 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:01.857 [2024-11-26T19:35:05.554Z] ====================================== 00:05:01.857 [2024-11-26T19:35:05.554Z] busy:2709683928 (cyc) 00:05:01.857 [2024-11-26T19:35:05.554Z] total_run_count: 367000 00:05:01.857 [2024-11-26T19:35:05.554Z] tsc_hz: 2700000000 (cyc) 00:05:01.857 [2024-11-26T19:35:05.554Z] ====================================== 00:05:01.857 [2024-11-26T19:35:05.554Z] poller_cost: 7383 (cyc), 2734 (nsec) 00:05:01.857 00:05:01.857 real 0m1.210s 00:05:01.857 user 0m1.137s 00:05:01.857 sys 0m0.069s 00:05:01.857 20:35:05 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.857 20:35:05 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:01.857 ************************************ 00:05:01.857 END TEST thread_poller_perf 00:05:01.857 ************************************ 00:05:01.857 20:35:05 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:02.115 20:35:05 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:02.115 20:35:05 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.115 20:35:05 thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.115 ************************************ 00:05:02.115 START TEST thread_poller_perf 00:05:02.115 ************************************ 00:05:02.115 20:35:05 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:02.115 [2024-11-26 20:35:05.593034] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:05:02.115 [2024-11-26 20:35:05.593097] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1542348 ] 00:05:02.115 [2024-11-26 20:35:05.658004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.115 [2024-11-26 20:35:05.713038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.115 Running 1000 pollers for 1 seconds with 0 microseconds period. 
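The poller_cost figure in the run above is plain arithmetic over the printed counters: busy TSC cycles divided by total_run_count gives cycles per poller invocation, and dividing by tsc_hz converts that to nanoseconds. Re-deriving the first run's numbers in shell (a sanity check only, not part of the test):

    busy=2709683928 runs=367000 tsc_hz=2700000000
    echo $(( busy / runs ))                          # 7383 cycles per poller run
    echo $(( busy / runs * 1000000000 / tsc_hz ))    # ~2734 nsec per poller run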
00:05:03.490 [2024-11-26T19:35:07.187Z] ====================================== 00:05:03.490 [2024-11-26T19:35:07.187Z] busy:2702486973 (cyc) 00:05:03.490 [2024-11-26T19:35:07.187Z] total_run_count: 4841000 00:05:03.490 [2024-11-26T19:35:07.187Z] tsc_hz: 2700000000 (cyc) 00:05:03.490 [2024-11-26T19:35:07.187Z] ====================================== 00:05:03.490 [2024-11-26T19:35:07.187Z] poller_cost: 558 (cyc), 206 (nsec) 00:05:03.490 00:05:03.490 real 0m1.198s 00:05:03.490 user 0m1.135s 00:05:03.490 sys 0m0.059s 00:05:03.490 20:35:06 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.490 20:35:06 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:03.490 ************************************ 00:05:03.490 END TEST thread_poller_perf 00:05:03.490 ************************************ 00:05:03.490 20:35:06 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:03.490 00:05:03.490 real 0m2.660s 00:05:03.490 user 0m2.412s 00:05:03.490 sys 0m0.252s 00:05:03.490 20:35:06 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.490 20:35:06 thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.490 ************************************ 00:05:03.490 END TEST thread 00:05:03.490 ************************************ 00:05:03.490 20:35:06 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:03.490 20:35:06 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:03.490 20:35:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.490 20:35:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.490 20:35:06 -- common/autotest_common.sh@10 -- # set +x 00:05:03.490 ************************************ 00:05:03.490 START TEST app_cmdline 00:05:03.490 ************************************ 00:05:03.490 20:35:06 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:03.490 * Looking for test storage... 
00:05:03.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:03.490 20:35:06 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:03.490 20:35:06 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:03.490 20:35:06 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:03.490 20:35:06 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:03.490 20:35:06 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.490 20:35:06 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.490 20:35:06 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.490 20:35:06 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.490 20:35:06 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.490 20:35:06 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.490 20:35:06 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.490 20:35:06 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.490 20:35:06 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.490 20:35:06 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.490 20:35:06 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.490 20:35:06 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:03.490 20:35:06 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:03.490 20:35:06 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.490 20:35:06 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:03.490 20:35:06 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:03.490 20:35:06 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:03.490 20:35:06 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.490 20:35:06 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:03.490 20:35:06 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.490 20:35:06 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:03.490 20:35:06 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:03.490 20:35:06 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.490 20:35:06 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:03.490 20:35:06 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.490 20:35:06 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.490 20:35:06 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.490 20:35:06 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:03.490 20:35:06 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.490 20:35:06 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:03.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.490 --rc genhtml_branch_coverage=1 00:05:03.490 --rc genhtml_function_coverage=1 00:05:03.490 --rc genhtml_legend=1 00:05:03.490 --rc geninfo_all_blocks=1 00:05:03.490 --rc geninfo_unexecuted_blocks=1 00:05:03.490 00:05:03.490 ' 00:05:03.490 20:35:06 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:03.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.490 --rc genhtml_branch_coverage=1 00:05:03.490 --rc genhtml_function_coverage=1 00:05:03.490 --rc genhtml_legend=1 00:05:03.490 --rc geninfo_all_blocks=1 00:05:03.490 --rc geninfo_unexecuted_blocks=1 
00:05:03.490 00:05:03.490 ' 00:05:03.490 20:35:06 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:03.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.490 --rc genhtml_branch_coverage=1 00:05:03.490 --rc genhtml_function_coverage=1 00:05:03.490 --rc genhtml_legend=1 00:05:03.490 --rc geninfo_all_blocks=1 00:05:03.490 --rc geninfo_unexecuted_blocks=1 00:05:03.490 00:05:03.490 ' 00:05:03.490 20:35:06 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:03.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.490 --rc genhtml_branch_coverage=1 00:05:03.490 --rc genhtml_function_coverage=1 00:05:03.490 --rc genhtml_legend=1 00:05:03.490 --rc geninfo_all_blocks=1 00:05:03.490 --rc geninfo_unexecuted_blocks=1 00:05:03.490 00:05:03.490 ' 00:05:03.490 20:35:06 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:03.490 20:35:06 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1542554 00:05:03.490 20:35:06 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:03.490 20:35:06 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1542554 00:05:03.490 20:35:07 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1542554 ']' 00:05:03.490 20:35:07 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.490 20:35:07 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.490 20:35:07 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.490 20:35:07 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.490 20:35:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:03.490 [2024-11-26 20:35:07.052028] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:05:03.490 [2024-11-26 20:35:07.052120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1542554 ] 00:05:03.490 [2024-11-26 20:35:07.117825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.490 [2024-11-26 20:35:07.174029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.748 20:35:07 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.748 20:35:07 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:03.748 20:35:07 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:04.006 { 00:05:04.006 "version": "SPDK v25.01-pre git sha1 752c08b51", 00:05:04.006 "fields": { 00:05:04.006 "major": 25, 00:05:04.006 "minor": 1, 00:05:04.006 "patch": 0, 00:05:04.006 "suffix": "-pre", 00:05:04.006 "commit": "752c08b51" 00:05:04.006 } 00:05:04.006 } 00:05:04.265 20:35:07 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:04.265 20:35:07 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:04.265 20:35:07 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:04.265 20:35:07 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:04.265 20:35:07 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:04.265 20:35:07 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:04.265 20:35:07 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.265 20:35:07 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:04.265 20:35:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:04.265 20:35:07 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.265 20:35:07 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:04.265 20:35:07 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:04.265 20:35:07 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:04.265 20:35:07 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:04.265 20:35:07 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:04.265 20:35:07 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:04.265 20:35:07 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:04.265 20:35:07 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:04.265 20:35:07 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:04.265 20:35:07 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:04.265 20:35:07 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:04.265 20:35:07 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:04.265 20:35:07 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:04.265 20:35:07 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:04.524 request: 00:05:04.524 { 00:05:04.524 "method": "env_dpdk_get_mem_stats", 00:05:04.524 "req_id": 1 00:05:04.524 } 00:05:04.524 Got JSON-RPC error response 00:05:04.524 response: 00:05:04.524 { 00:05:04.524 "code": -32601, 00:05:04.524 "message": "Method not found" 00:05:04.524 } 00:05:04.524 20:35:08 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:04.524 20:35:08 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:04.524 20:35:08 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:04.524 20:35:08 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:04.524 20:35:08 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1542554 00:05:04.524 20:35:08 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1542554 ']' 00:05:04.524 20:35:08 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1542554 00:05:04.524 20:35:08 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:04.524 20:35:08 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:04.524 20:35:08 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1542554 00:05:04.524 20:35:08 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:04.524 20:35:08 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:04.524 20:35:08 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1542554' 00:05:04.524 killing process with pid 1542554 00:05:04.524 20:35:08 app_cmdline -- common/autotest_common.sh@973 -- # kill 1542554 00:05:04.524 20:35:08 app_cmdline -- common/autotest_common.sh@978 -- # wait 1542554 00:05:04.782 00:05:04.782 real 0m1.624s 00:05:04.782 user 0m1.976s 00:05:04.782 sys 0m0.480s 00:05:04.782 20:35:08 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.782 20:35:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:04.782 ************************************ 00:05:04.782 END TEST app_cmdline 00:05:04.782 ************************************ 00:05:05.041 20:35:08 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:05.041 20:35:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.041 20:35:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.041 20:35:08 -- common/autotest_common.sh@10 -- # set +x 00:05:05.041 ************************************ 00:05:05.041 START TEST version 00:05:05.041 ************************************ 00:05:05.041 20:35:08 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:05.041 * Looking for test storage... 
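The app_cmdline run that just finished is essentially a test of the --rpcs-allowed filter: spdk_tgt was started allowing only spdk_get_version and rpc_get_methods, so those two calls succeed while anything else is rejected with JSON-RPC error -32601 (Method not found), exactly as the env_dpdk_get_mem_stats response above shows. A rough sketch of the same checks against a target started with the flags from the trace, minus the waitforlisten/cleanup steps:

    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    ./scripts/rpc.py spdk_get_version         # allowed: returns the version/fields JSON seen above
    ./scripts/rpc.py rpc_get_methods          # allowed: lists exactly the two permitted methods
    ./scripts/rpc.py env_dpdk_get_mem_stats   # filtered: error -32601, "Method not found"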
00:05:05.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:05.041 20:35:08 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:05.041 20:35:08 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:05.041 20:35:08 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:05.041 20:35:08 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:05.041 20:35:08 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.041 20:35:08 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.041 20:35:08 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.041 20:35:08 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.041 20:35:08 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.041 20:35:08 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.041 20:35:08 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.041 20:35:08 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.041 20:35:08 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.041 20:35:08 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.041 20:35:08 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.041 20:35:08 version -- scripts/common.sh@344 -- # case "$op" in 00:05:05.041 20:35:08 version -- scripts/common.sh@345 -- # : 1 00:05:05.041 20:35:08 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.041 20:35:08 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:05.041 20:35:08 version -- scripts/common.sh@365 -- # decimal 1 00:05:05.041 20:35:08 version -- scripts/common.sh@353 -- # local d=1 00:05:05.041 20:35:08 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.041 20:35:08 version -- scripts/common.sh@355 -- # echo 1 00:05:05.041 20:35:08 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.041 20:35:08 version -- scripts/common.sh@366 -- # decimal 2 00:05:05.041 20:35:08 version -- scripts/common.sh@353 -- # local d=2 00:05:05.041 20:35:08 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.041 20:35:08 version -- scripts/common.sh@355 -- # echo 2 00:05:05.041 20:35:08 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.041 20:35:08 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.041 20:35:08 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.041 20:35:08 version -- scripts/common.sh@368 -- # return 0 00:05:05.041 20:35:08 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.041 20:35:08 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:05.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.041 --rc genhtml_branch_coverage=1 00:05:05.041 --rc genhtml_function_coverage=1 00:05:05.041 --rc genhtml_legend=1 00:05:05.041 --rc geninfo_all_blocks=1 00:05:05.041 --rc geninfo_unexecuted_blocks=1 00:05:05.041 00:05:05.041 ' 00:05:05.041 20:35:08 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:05.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.041 --rc genhtml_branch_coverage=1 00:05:05.041 --rc genhtml_function_coverage=1 00:05:05.041 --rc genhtml_legend=1 00:05:05.041 --rc geninfo_all_blocks=1 00:05:05.041 --rc geninfo_unexecuted_blocks=1 00:05:05.041 00:05:05.041 ' 00:05:05.041 20:35:08 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:05.041 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.041 --rc genhtml_branch_coverage=1 00:05:05.041 --rc genhtml_function_coverage=1 00:05:05.041 --rc genhtml_legend=1 00:05:05.041 --rc geninfo_all_blocks=1 00:05:05.041 --rc geninfo_unexecuted_blocks=1 00:05:05.041 00:05:05.041 ' 00:05:05.041 20:35:08 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:05.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.041 --rc genhtml_branch_coverage=1 00:05:05.041 --rc genhtml_function_coverage=1 00:05:05.041 --rc genhtml_legend=1 00:05:05.041 --rc geninfo_all_blocks=1 00:05:05.041 --rc geninfo_unexecuted_blocks=1 00:05:05.041 00:05:05.041 ' 00:05:05.041 20:35:08 version -- app/version.sh@17 -- # get_header_version major 00:05:05.041 20:35:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:05.041 20:35:08 version -- app/version.sh@14 -- # cut -f2 00:05:05.041 20:35:08 version -- app/version.sh@14 -- # tr -d '"' 00:05:05.041 20:35:08 version -- app/version.sh@17 -- # major=25 00:05:05.041 20:35:08 version -- app/version.sh@18 -- # get_header_version minor 00:05:05.041 20:35:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:05.041 20:35:08 version -- app/version.sh@14 -- # cut -f2 00:05:05.041 20:35:08 version -- app/version.sh@14 -- # tr -d '"' 00:05:05.041 20:35:08 version -- app/version.sh@18 -- # minor=1 00:05:05.041 20:35:08 version -- app/version.sh@19 -- # get_header_version patch 00:05:05.041 20:35:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:05.041 20:35:08 version -- app/version.sh@14 -- # cut -f2 00:05:05.041 20:35:08 version -- app/version.sh@14 -- # tr -d '"' 00:05:05.041 20:35:08 version -- app/version.sh@19 -- # patch=0 00:05:05.041 20:35:08 version -- app/version.sh@20 -- # get_header_version suffix 00:05:05.041 20:35:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:05.041 20:35:08 version -- app/version.sh@14 -- # cut -f2 00:05:05.041 20:35:08 version -- app/version.sh@14 -- # tr -d '"' 00:05:05.041 20:35:08 version -- app/version.sh@20 -- # suffix=-pre 00:05:05.041 20:35:08 version -- app/version.sh@22 -- # version=25.1 00:05:05.041 20:35:08 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:05.041 20:35:08 version -- app/version.sh@28 -- # version=25.1rc0 00:05:05.041 20:35:08 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:05.041 20:35:08 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:05.041 20:35:08 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:05.041 20:35:08 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:05.041 00:05:05.041 real 0m0.196s 00:05:05.041 user 0m0.131s 00:05:05.041 sys 0m0.090s 00:05:05.041 20:35:08 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.041 
20:35:08 version -- common/autotest_common.sh@10 -- # set +x 00:05:05.041 ************************************ 00:05:05.041 END TEST version 00:05:05.041 ************************************ 00:05:05.299 20:35:08 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:05.299 20:35:08 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:05.299 20:35:08 -- spdk/autotest.sh@194 -- # uname -s 00:05:05.299 20:35:08 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:05.300 20:35:08 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:05.300 20:35:08 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:05.300 20:35:08 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:05.300 20:35:08 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:05.300 20:35:08 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:05.300 20:35:08 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:05.300 20:35:08 -- common/autotest_common.sh@10 -- # set +x 00:05:05.300 20:35:08 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:05.300 20:35:08 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:05.300 20:35:08 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:05.300 20:35:08 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:05.300 20:35:08 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:05.300 20:35:08 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:05.300 20:35:08 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:05.300 20:35:08 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:05.300 20:35:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.300 20:35:08 -- common/autotest_common.sh@10 -- # set +x 00:05:05.300 ************************************ 00:05:05.300 START TEST nvmf_tcp 00:05:05.300 ************************************ 00:05:05.300 20:35:08 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:05.300 * Looking for test storage... 
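The version test above boils down to scraping the SPDK_VERSION_* defines out of include/spdk/version.h and checking them against what the Python package reports. Roughly what the get_header_version helper in the trace is doing (a sketch, not the script verbatim):

    # Extract one field from include/spdk/version.h; cut -f2 relies on the tab
    # between the macro name and its value, as in the trace above.
    get_header_version() {
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h | cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)     # 25
    minor=$(get_header_version MINOR)     # 1
    patch=$(get_header_version PATCH)     # 0
    suffix=$(get_header_version SUFFIX)   # -pre
    version="$major.$minor"               # patch is 0, so it is not appended -> 25.1
    # version.sh then maps the -pre suffix to rc0 and compares against
    # python3 -c 'import spdk; print(spdk.__version__)'  -> 25.1rc0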
00:05:05.300 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:05.300 20:35:08 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:05.300 20:35:08 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:05.300 20:35:08 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:05.300 20:35:08 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:05.300 20:35:08 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.300 20:35:08 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.300 20:35:08 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.300 20:35:08 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.300 20:35:08 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.300 20:35:08 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.300 20:35:08 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.300 20:35:08 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.300 20:35:08 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.300 20:35:08 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.300 20:35:08 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.300 20:35:08 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:05.300 20:35:08 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:05.300 20:35:08 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.300 20:35:08 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:05.300 20:35:08 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:05.300 20:35:08 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:05.300 20:35:08 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.300 20:35:08 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:05.300 20:35:08 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.300 20:35:08 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:05.300 20:35:08 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:05.300 20:35:08 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.300 20:35:08 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:05.300 20:35:08 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.300 20:35:08 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.300 20:35:08 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.300 20:35:08 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:05.300 20:35:08 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.300 20:35:08 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:05.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.300 --rc genhtml_branch_coverage=1 00:05:05.300 --rc genhtml_function_coverage=1 00:05:05.300 --rc genhtml_legend=1 00:05:05.300 --rc geninfo_all_blocks=1 00:05:05.300 --rc geninfo_unexecuted_blocks=1 00:05:05.300 00:05:05.300 ' 00:05:05.300 20:35:08 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:05.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.300 --rc genhtml_branch_coverage=1 00:05:05.300 --rc genhtml_function_coverage=1 00:05:05.300 --rc genhtml_legend=1 00:05:05.300 --rc geninfo_all_blocks=1 00:05:05.300 --rc geninfo_unexecuted_blocks=1 00:05:05.300 00:05:05.300 ' 00:05:05.300 20:35:08 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:05.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.300 --rc genhtml_branch_coverage=1 00:05:05.300 --rc genhtml_function_coverage=1 00:05:05.300 --rc genhtml_legend=1 00:05:05.300 --rc geninfo_all_blocks=1 00:05:05.300 --rc geninfo_unexecuted_blocks=1 00:05:05.300 00:05:05.300 ' 00:05:05.300 20:35:08 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:05.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.300 --rc genhtml_branch_coverage=1 00:05:05.300 --rc genhtml_function_coverage=1 00:05:05.300 --rc genhtml_legend=1 00:05:05.300 --rc geninfo_all_blocks=1 00:05:05.300 --rc geninfo_unexecuted_blocks=1 00:05:05.300 00:05:05.300 ' 00:05:05.300 20:35:08 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:05.300 20:35:08 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:05.300 20:35:08 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:05.300 20:35:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:05.300 20:35:08 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.300 20:35:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:05.300 ************************************ 00:05:05.300 START TEST nvmf_target_core 00:05:05.300 ************************************ 00:05:05.300 20:35:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:05.558 * Looking for test storage... 00:05:05.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:05.558 20:35:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:05.558 20:35:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:05:05.558 20:35:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:05.558 20:35:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:05.558 20:35:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.558 20:35:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.558 20:35:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.558 20:35:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.558 20:35:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.558 20:35:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.558 20:35:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.558 20:35:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.558 20:35:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.558 20:35:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.558 20:35:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.558 20:35:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:05.558 20:35:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:05.558 20:35:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.558 20:35:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.558 20:35:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:05.558 20:35:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:05.558 20:35:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.558 20:35:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:05.558 20:35:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.558 20:35:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:05.558 20:35:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:05.558 20:35:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.558 20:35:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:05.558 20:35:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.558 20:35:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.558 20:35:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.558 20:35:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:05.558 20:35:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:05.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.559 --rc genhtml_branch_coverage=1 00:05:05.559 --rc genhtml_function_coverage=1 00:05:05.559 --rc genhtml_legend=1 00:05:05.559 --rc geninfo_all_blocks=1 00:05:05.559 --rc geninfo_unexecuted_blocks=1 00:05:05.559 00:05:05.559 ' 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:05.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.559 --rc genhtml_branch_coverage=1 00:05:05.559 --rc genhtml_function_coverage=1 00:05:05.559 --rc genhtml_legend=1 00:05:05.559 --rc geninfo_all_blocks=1 00:05:05.559 --rc geninfo_unexecuted_blocks=1 00:05:05.559 00:05:05.559 ' 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:05.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.559 --rc genhtml_branch_coverage=1 00:05:05.559 --rc genhtml_function_coverage=1 00:05:05.559 --rc genhtml_legend=1 00:05:05.559 --rc geninfo_all_blocks=1 00:05:05.559 --rc geninfo_unexecuted_blocks=1 00:05:05.559 00:05:05.559 ' 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:05.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.559 --rc genhtml_branch_coverage=1 00:05:05.559 --rc genhtml_function_coverage=1 00:05:05.559 --rc genhtml_legend=1 00:05:05.559 --rc geninfo_all_blocks=1 00:05:05.559 --rc geninfo_unexecuted_blocks=1 00:05:05.559 00:05:05.559 ' 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:05.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:05.559 
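The xtrace above shows how autotest_common.sh picks its coverage flags: it reads the installed lcov version with lcov --version | awk '{print $NF}' (1.15 on this machine, which is why 1.15 appears literally in the expanded trace), compares it field by field against 2 via the cmp_versions/lt helpers in scripts/common.sh, and only then exports LCOV_OPTS and LCOV with the 1.x-style --rc options. A minimal sketch of that dotted-version comparison, reconstructed from the trace rather than copied from the real helper (which supports more operators and separators):

    #!/usr/bin/env bash
    # Sketch: field-wise "less than" on dotted version strings.
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)   # split both versions on dots
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal, so not less-than
    }

    lcov_ver=$(lcov --version | awk '{print $NF}')
    if version_lt "$lcov_ver" 2; then
        # pre-2.0 lcov takes coverage tuning as --rc options, as in the trace
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi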
************************************ 00:05:05.559 START TEST nvmf_abort 00:05:05.559 ************************************ 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:05.559 * Looking for test storage... 00:05:05.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.559 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.560 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.560 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.560 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.560 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.560 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:05.560 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:05.560 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.560 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:05.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.819 --rc genhtml_branch_coverage=1 00:05:05.819 --rc genhtml_function_coverage=1 00:05:05.819 --rc genhtml_legend=1 00:05:05.819 --rc geninfo_all_blocks=1 00:05:05.819 --rc geninfo_unexecuted_blocks=1 00:05:05.819 00:05:05.819 ' 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:05.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.819 --rc genhtml_branch_coverage=1 00:05:05.819 --rc genhtml_function_coverage=1 00:05:05.819 --rc genhtml_legend=1 00:05:05.819 --rc geninfo_all_blocks=1 00:05:05.819 --rc geninfo_unexecuted_blocks=1 00:05:05.819 00:05:05.819 ' 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:05.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.819 --rc genhtml_branch_coverage=1 00:05:05.819 --rc genhtml_function_coverage=1 00:05:05.819 --rc genhtml_legend=1 00:05:05.819 --rc geninfo_all_blocks=1 00:05:05.819 --rc geninfo_unexecuted_blocks=1 00:05:05.819 00:05:05.819 ' 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:05.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.819 --rc genhtml_branch_coverage=1 00:05:05.819 --rc genhtml_function_coverage=1 00:05:05.819 --rc genhtml_legend=1 00:05:05.819 --rc geninfo_all_blocks=1 00:05:05.819 --rc geninfo_unexecuted_blocks=1 00:05:05.819 00:05:05.819 ' 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:05.819 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.820 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:05.820 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:05.820 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:05.820 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:05.820 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:05.820 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:05.820 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:05.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:05.820 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:05.820 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:05.820 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:05.820 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:05.820 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:05.820 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
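nvmf/common.sh, sourced again here for the abort test, only sets defaults: the target ports 4420-4422, the serial number, and a host NQN/ID pair freshly generated with nvme gen-hostnqn. Later steps combine those pieces either into a kernel-initiator connect command or, as in this test, into the transport ID string handed to SPDK's abort example. A hedged illustration of how they compose; TRID is an invented name for this sketch, and the HOSTID derivation is shown for illustration only (the trace simply reports the resulting uuid):

    # Values as seen in the trace; only TRID is a made-up variable name.
    NVMF_PORT=4420
    NVMF_FIRST_TARGET_IP=10.0.0.2            # assigned by nvmf_tcp_init further down
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*:}          # uuid portion of the host NQN

    # Transport ID consumed by build/examples/abort (see the invocation below):
    TRID="trtype:tcp adrfam:IPv4 traddr:${NVMF_FIRST_TARGET_IP} trsvcid:${NVMF_PORT}"

    # Equivalent kernel-initiator connect, for tests that use nvme-cli instead:
    # nvme connect -t tcp -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT" \
    #     -n nqn.2016-06.io.spdk:cnode0 --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"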
00:05:05.820 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:05.820 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:05.820 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:05.820 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:05.820 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:05.820 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:05.820 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:05.820 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:05.820 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:05.820 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:05.820 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:05.820 20:35:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:07.722 20:35:11 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:05:07.722 Found 0000:09:00.0 (0x8086 - 0x159b) 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:05:07.722 Found 0000:09:00.1 (0x8086 - 0x159b) 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:07.722 20:35:11 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:05:07.722 Found net devices under 0000:09:00.0: cvl_0_0 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:05:07.722 Found net devices under 0000:09:00.1: cvl_0_1 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:07.722 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:07.723 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:07.723 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:07.723 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:07.723 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:07.723 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:07.723 20:35:11 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:07.723 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:07.723 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:07.723 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:07.723 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:07.723 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:07.723 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:07.980 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:07.980 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:07.980 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:07.980 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:07.980 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:07.980 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:07.981 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:07.981 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:07.981 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:07.981 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:07.981 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:05:07.981 00:05:07.981 --- 10.0.0.2 ping statistics --- 00:05:07.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:07.981 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:05:07.981 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:07.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:07.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:05:07.981 00:05:07.981 --- 10.0.0.1 ping statistics --- 00:05:07.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:07.981 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:05:07.981 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:07.981 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:07.981 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:07.981 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:07.981 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:07.981 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:07.981 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:07.981 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:07.981 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:07.981 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:07.981 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:07.981 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:07.981 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:07.981 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1544639 00:05:07.981 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:07.981 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1544639 00:05:07.981 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1544639 ']' 00:05:07.981 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.981 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.981 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.981 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.981 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:07.981 [2024-11-26 20:35:11.609444] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
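At this point nvmftestinit has finished the physical-NIC setup: the two E810 ports found under /sys/bus/pci/devices/0000:09:00.0 and .1 expose cvl_0_0 and cvl_0_1, cvl_0_0 is moved into a fresh network namespace to act as the target side, an iptables rule opens port 4420, and the two pings above confirm reachability in both directions; the nvmf_tgt whose startup banner appears just above is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xE). Condensed from the trace, the topology amounts to roughly:

    # Condensed from the nvmf_tcp_init trace above (phy NICs cvl_0_0 / cvl_0_1).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # (the real call also tags the rule with an SPDK_NVMF comment so teardown can strip it)
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> root ns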
00:05:07.981 [2024-11-26 20:35:11.609539] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:08.238 [2024-11-26 20:35:11.681004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:08.239 [2024-11-26 20:35:11.736451] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:08.239 [2024-11-26 20:35:11.736505] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:08.239 [2024-11-26 20:35:11.736533] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:08.239 [2024-11-26 20:35:11.736544] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:08.239 [2024-11-26 20:35:11.736554] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:08.239 [2024-11-26 20:35:11.738022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:08.239 [2024-11-26 20:35:11.738093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:08.239 [2024-11-26 20:35:11.738098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.239 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.239 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:08.239 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:08.239 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:08.239 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:08.239 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:08.239 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:08.239 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.239 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:08.239 [2024-11-26 20:35:11.885714] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:08.239 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.239 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:08.239 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.239 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:08.239 Malloc0 00:05:08.239 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.239 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:08.239 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.239 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:08.496 Delay0 
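The rpc_cmd calls in the trace are autotest's wrapper around SPDK's RPC client; outside the harness the same target configuration could be driven with scripts/rpc.py, as in the sketch below (assuming the default /var/tmp/spdk.sock socket, and noting that here the app listens inside the cvl_0_0_ns_spdk namespace, so each call would be prefixed with ip netns exec as in the trace). The Malloc bdev is wrapped in a delay bdev so queued I/O stays outstanding long enough for aborts to land; the -r/-t/-w/-n values are latencies in microseconds. The subsystem, namespace, and listener steps appear in the trace just below.

    # Sketch: the abort target's configuration as plain rpc.py calls.
    RPC="scripts/rpc.py"   # run via: ip netns exec cvl_0_0_ns_spdk $RPC ...
    $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
    $RPC bdev_malloc_create 64 4096 -b Malloc0
    $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420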
00:05:08.496 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.496 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:08.496 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.496 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:08.496 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.496 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:08.496 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.496 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:08.496 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.496 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:08.496 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.496 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:08.496 [2024-11-26 20:35:11.958010] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:08.496 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.496 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:08.496 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.496 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:08.496 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.496 20:35:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:08.496 [2024-11-26 20:35:12.104389] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:11.079 Initializing NVMe Controllers 00:05:11.079 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:11.079 controller IO queue size 128 less than required 00:05:11.079 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:11.079 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:11.079 Initialization complete. Launching workers. 
00:05:11.079 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28294 00:05:11.079 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28355, failed to submit 62 00:05:11.079 success 28298, unsuccessful 57, failed 0 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:11.079 rmmod nvme_tcp 00:05:11.079 rmmod nvme_fabrics 00:05:11.079 rmmod nvme_keyring 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1544639 ']' 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1544639 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1544639 ']' 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1544639 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1544639 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1544639' 00:05:11.079 killing process with pid 1544639 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1544639 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1544639 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:11.079 20:35:14 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:11.079 20:35:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:12.985 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:12.985 00:05:12.985 real 0m7.468s 00:05:12.985 user 0m10.882s 00:05:12.985 sys 0m2.582s 00:05:12.985 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.985 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:12.985 ************************************ 00:05:12.985 END TEST nvmf_abort 00:05:12.985 ************************************ 00:05:12.985 20:35:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:12.985 20:35:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:12.985 20:35:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.985 20:35:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:12.985 ************************************ 00:05:12.985 START TEST nvmf_ns_hotplug_stress 00:05:12.985 ************************************ 00:05:12.985 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:13.244 * Looking for test storage... 
00:05:13.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:13.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.244 --rc genhtml_branch_coverage=1 00:05:13.244 --rc genhtml_function_coverage=1 00:05:13.244 --rc genhtml_legend=1 00:05:13.244 --rc geninfo_all_blocks=1 00:05:13.244 --rc geninfo_unexecuted_blocks=1 00:05:13.244 00:05:13.244 ' 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:13.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.244 --rc genhtml_branch_coverage=1 00:05:13.244 --rc genhtml_function_coverage=1 00:05:13.244 --rc genhtml_legend=1 00:05:13.244 --rc geninfo_all_blocks=1 00:05:13.244 --rc geninfo_unexecuted_blocks=1 00:05:13.244 00:05:13.244 ' 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:13.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.244 --rc genhtml_branch_coverage=1 00:05:13.244 --rc genhtml_function_coverage=1 00:05:13.244 --rc genhtml_legend=1 00:05:13.244 --rc geninfo_all_blocks=1 00:05:13.244 --rc geninfo_unexecuted_blocks=1 00:05:13.244 00:05:13.244 ' 00:05:13.244 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:13.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.244 --rc genhtml_branch_coverage=1 00:05:13.244 --rc genhtml_function_coverage=1 00:05:13.244 --rc genhtml_legend=1 00:05:13.244 --rc geninfo_all_blocks=1 00:05:13.245 --rc geninfo_unexecuted_blocks=1 00:05:13.245 00:05:13.245 ' 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:13.245 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:13.245 20:35:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:05:15.773 Found 0000:09:00.0 (0x8086 - 0x159b) 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:15.773 
20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:15.773 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:05:15.773 Found 0000:09:00.1 (0x8086 - 0x159b) 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:05:15.774 Found net devices under 0000:09:00.0: cvl_0_0 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:05:15.774 Found net devices under 0000:09:00.1: cvl_0_1 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:15.774 20:35:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:15.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:15.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:05:15.774 00:05:15.774 --- 10.0.0.2 ping statistics --- 00:05:15.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:15.774 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:15.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:15.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:05:15.774 00:05:15.774 --- 10.0.0.1 ping statistics --- 00:05:15.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:15.774 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1547013 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1547013 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
1547013 ']' 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:15.774 [2024-11-26 20:35:19.197769] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:05:15.774 [2024-11-26 20:35:19.197862] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:15.774 [2024-11-26 20:35:19.273835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:15.774 [2024-11-26 20:35:19.332959] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:15.774 [2024-11-26 20:35:19.333014] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:15.774 [2024-11-26 20:35:19.333043] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:15.774 [2024-11-26 20:35:19.333055] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:15.774 [2024-11-26 20:35:19.333064] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
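Condensed, the nvmf_tcp_init sequence traced above splits the two E810 ports into a target/initiator pair: the first port is moved into its own network namespace and addressed as 10.0.0.2, the second stays in the default namespace as 10.0.0.1, TCP port 4420 is opened for NVMe/TCP, and both directions are ping-tested. A minimal sketch of those steps, using the interface names and addresses as they appear in this run (the real helper also flushes stale addresses and auto-detects which net devices sit under the PCI functions):

  # Move the target-side port into a private namespace (mirrors nvmf_tcp_init above).
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # Address the initiator (default namespace) and target (namespace) sides.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  # Bring the links up, including loopback inside the namespace.
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Accept NVMe/TCP traffic on the default port 4420 before the target starts.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Reachability check in both directions before nvmf_tgt is launched in the namespace.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1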
00:05:15.774 [2024-11-26 20:35:19.334698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:15.774 [2024-11-26 20:35:19.334763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:15.774 [2024-11-26 20:35:19.334768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:15.774 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:16.032 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:16.033 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:16.033 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:16.290 [2024-11-26 20:35:19.731140] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:16.290 20:35:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:16.547 20:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:16.805 [2024-11-26 20:35:20.294252] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:16.805 20:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:17.061 20:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:17.318 Malloc0 00:05:17.318 20:35:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:17.575 Delay0 00:05:17.575 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:17.832 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:18.089 NULL1 00:05:18.090 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:18.347 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1547308 00:05:18.347 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:18.347 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1547308 00:05:18.347 20:35:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:19.719 Read completed with error (sct=0, sc=11) 00:05:19.719 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:19.719 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.719 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.719 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.719 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:19.977 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:19.977 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:20.234 true 00:05:20.235 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1547308 00:05:20.235 20:35:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:21.165 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:21.165 20:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:21.165 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:21.165 20:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:21.165 20:35:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:21.421 true 00:05:21.421 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1547308 00:05:21.421 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:21.678 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:21.935 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:21.935 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:22.191 true 00:05:22.448 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1547308 00:05:22.448 20:35:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.013 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.270 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:23.270 20:35:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:23.528 true 00:05:23.528 20:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1547308 00:05:23.528 20:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.794 20:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.114 20:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:24.114 20:35:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:24.386 true 00:05:24.386 20:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1547308 00:05:24.386 20:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.643 20:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.900 20:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:24.900 20:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:25.157 true 00:05:25.157 20:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1547308 00:05:25.157 20:35:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.087 20:35:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.651 20:35:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:26.651 20:35:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:26.651 true 00:05:26.908 20:35:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1547308 00:05:26.908 20:35:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.166 20:35:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.424 20:35:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:27.424 20:35:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:27.681 true 00:05:27.681 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1547308 00:05:27.681 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.939 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.196 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:28.196 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:28.452 true 00:05:28.452 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1547308 00:05:28.452 20:35:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:29.384 20:35:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.643 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:29.643 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:29.900 true 00:05:29.900 20:35:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1547308 00:05:29.900 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.157 20:35:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.415 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:30.415 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:30.671 true 00:05:30.671 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1547308 00:05:30.671 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.928 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.186 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:31.186 20:35:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:31.444 true 00:05:31.702 20:35:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1547308 00:05:31.702 20:35:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.636 20:35:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.917 20:35:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:32.918 20:35:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:33.175 true 00:05:33.175 20:35:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1547308 00:05:33.175 20:35:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.432 20:35:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.690 20:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:33.690 20:35:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:33.948 true 00:05:33.948 20:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1547308 00:05:33.948 20:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.206 20:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.464 20:35:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:34.464 20:35:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:34.722 true 00:05:34.722 20:35:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1547308 00:05:34.722 20:35:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.665 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:35.665 20:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.665 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:35.926 20:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:35.926 20:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:36.185 true 00:05:36.185 20:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1547308 00:05:36.185 20:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.442 20:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.700 20:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:36.701 20:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:36.959 true 00:05:36.959 20:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1547308 00:05:36.959 20:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.217 20:35:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.474 20:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:37.474 20:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:37.730 true 00:05:37.730 20:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1547308 00:05:37.730 20:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.103 20:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.103 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.103 20:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:39.103 20:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:39.359 true 00:05:39.359 20:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1547308 00:05:39.359 20:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.615 20:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.871 20:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:39.871 20:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:40.128 true 00:05:40.128 20:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1547308 00:05:40.128 20:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.385 20:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.642 20:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:40.642 20:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:40.899 true 00:05:40.899 20:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
1547308 00:05:40.899 20:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:42.269 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:42.269 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:42.269 20:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:42.527 true 00:05:42.527 20:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1547308 00:05:42.527 20:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.784 20:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.041 20:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:43.041 20:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:43.298 true 00:05:43.298 20:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1547308 00:05:43.298 20:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.229 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.229 20:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.229 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.229 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.485 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:44.485 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:44.742 true 00:05:44.742 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1547308 00:05:44.742 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.998 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.255 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:45.255 20:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:45.513 true 00:05:45.513 20:35:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1547308 00:05:45.513 20:35:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.445 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.702 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:46.702 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:46.959 true 00:05:46.959 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1547308 00:05:46.959 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.217 20:35:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.474 20:35:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:47.474 20:35:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:47.731 true 00:05:47.731 20:35:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1547308 00:05:47.731 20:35:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.988 20:35:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.619 20:35:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:48.619 20:35:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:48.619 true 00:05:48.619 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1547308 00:05:48.619 20:35:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.575 Initializing NVMe Controllers 00:05:49.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:49.575 Controller IO queue size 128, less than required. 00:05:49.575 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:49.575 Controller IO queue size 128, less than required. 00:05:49.575 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:49.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:05:49.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:05:49.575 Initialization complete. Launching workers. 00:05:49.576 ======================================================== 00:05:49.576 Latency(us) 00:05:49.576 Device Information : IOPS MiB/s Average min max 00:05:49.576 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 599.47 0.29 94807.58 3155.27 1012473.11 00:05:49.576 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8604.38 4.20 14876.86 3314.99 521035.15 00:05:49.576 ======================================================== 00:05:49.576 Total : 9203.85 4.49 20082.91 3155.27 1012473.11 00:05:49.576 00:05:49.576 20:35:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.832 20:35:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:49.832 20:35:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:50.089 true 00:05:50.089 20:35:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1547308 00:05:50.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1547308) - No such process 00:05:50.089 20:35:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1547308 00:05:50.089 20:35:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.346 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:50.603 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:05:50.603 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:05:50.603 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:05:50.603 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:50.603 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 
100 4096 00:05:51.167 null0 00:05:51.167 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:51.167 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:51.167 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:05:51.167 null1 00:05:51.167 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:51.167 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:51.167 20:35:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:05:51.424 null2 00:05:51.424 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:51.424 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:51.424 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:51.681 null3 00:05:51.939 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:51.939 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:51.939 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:52.196 null4 00:05:52.196 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:52.196 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:52.196 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:05:52.453 null5 00:05:52.453 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:52.454 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:52.454 20:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:05:52.711 null6 00:05:52.711 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:52.711 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:52.711 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:05:52.969 null7 00:05:52.969 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
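(Editorial note: the interleaved trace above is ns_hotplug_stress.sh creating eight thin "null" bdevs, null0 through null7, and then starting one add/remove worker per bdev. Reconstructed from the visible "-- #" trace lines, the setup step is roughly the sketch below; the rpc.py path and the 100/4096 arguments are copied from the trace itself, which appear to be the bdev size in MB and the block size, and the default SPDK RPC socket is assumed.)

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path as used throughout this run
    nthreads=8
    # One null bdev per worker: null0 .. null7, 100 MB, 4096-byte blocks (arguments as seen in the trace)
    for ((i = 0; i < nthreads; i++)); do
        "$rpc" bdev_null_create "null$i" 100 4096
    done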
00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
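(Editorial note: each worker is the script's add_remove helper running as a background subshell; the "pids+=($!)" entries and the later "wait 1551518 1551519 ..." entry are the launch-and-collect pattern. A minimal sketch of that pattern, reconstructed from the trace: the loop bound of 10 and the argument order of nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns are exactly as shown above, while variable names are otherwise illustrative.)

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subsys=nqn.2016-06.io.spdk:cnode1
    nthreads=8
    # Repeatedly attach and detach one namespace ID backed by one null bdev
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$subsys" "$bdev"
            "$rpc" nvmf_subsystem_remove_ns "$subsys" "$nsid"
        done
    }
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &   # eight concurrent workers, NSIDs 1..8
        pids+=($!)
    done
    wait "${pids[@]}"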
00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1551518 1551519 1551521 1551523 1551525 1551527 1551529 1551531 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:52.970 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:53.228 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:53.228 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:53.228 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:53.228 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:53.228 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:53.228 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:53.228 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.228 20:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:53.486 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.486 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.486 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:53.486 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.486 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.486 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:53.486 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.486 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.486 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:53.486 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.486 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.486 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:53.486 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.486 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.486 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:53.486 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.486 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.486 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:53.486 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.486 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.486 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:53.486 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:53.486 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:53.486 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:53.744 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:53.744 20:35:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:53.744 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:53.744 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:53.744 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:53.744 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:53.744 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:53.744 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.002 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.002 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.002 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:54.002 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.002 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.002 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:54.002 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.002 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.002 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:54.002 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.002 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.002 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:54.002 20:35:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.002 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.002 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:54.002 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.002 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.002 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.002 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:54.002 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.002 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:54.002 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.002 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.002 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:54.566 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:54.566 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:54.566 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:54.566 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:54.566 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:54.566 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:54.566 20:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:54.566 20:35:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.824 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.824 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.824 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:54.824 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.824 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.824 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:54.824 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.824 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.824 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:54.824 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.824 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.824 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:54.824 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.824 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.824 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:54.824 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.824 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.824 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.824 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.824 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:54.824 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 
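(Editorial note: from this point the output is the same add/remove churn repeated by the eight workers, so the NSID ordering is non-deterministic; the workers race each other on the same subsystem. The run itself never inspects the result, but if one wanted to see which namespaces are attached at a given instant during such a stress loop, the nvmf_get_subsystems RPC dumps the subsystem state; this is a hedged illustration, not part of this trace.)

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Dump all NVMe-oF subsystems; the namespace list reported for nqn.2016-06.io.spdk:cnode1
    # reflects whichever add_ns/remove_ns calls have completed so far.
    "$rpc" nvmf_get_subsystems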
00:05:54.824 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:54.824 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:54.824 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:55.081 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:55.081 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.081 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:55.081 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:55.081 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:55.081 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:55.081 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:55.081 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:55.338 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.338 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.338 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:55.338 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.338 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.338 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:55.338 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.338 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.338 20:35:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:55.338 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.338 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.338 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:55.338 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.338 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.338 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:55.338 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.338 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.338 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:55.338 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.338 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.338 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:55.338 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.339 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.339 20:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:55.596 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.596 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:55.596 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:55.596 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:55.596 20:35:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:55.596 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:55.596 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:55.596 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:55.852 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.852 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.852 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:55.852 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.852 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.852 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:55.852 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.852 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.852 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.852 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.852 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:55.852 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:55.852 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.852 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.852 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:55.852 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.852 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.852 20:35:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:55.852 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.852 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.852 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:55.852 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:55.853 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:55.853 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:56.110 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:56.110 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:56.367 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:56.367 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:56.367 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:56.367 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.367 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:56.367 20:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:56.624 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.624 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.625 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:56.625 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.625 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.625 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:56.625 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.625 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.625 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:56.625 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.625 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.625 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:56.625 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.625 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.625 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:56.625 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.625 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.625 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.625 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:56.625 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.625 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:56.625 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:56.625 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:56.625 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:56.882 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:56.882 20:36:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:56.882 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:56.882 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:56.882 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:56.882 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:56.882 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.882 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:57.139 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.139 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.139 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:57.139 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.139 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.139 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:57.139 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.139 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.139 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:57.139 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.139 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.140 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:57.140 20:36:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.140 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.140 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:57.140 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.140 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.140 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:57.140 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.140 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.140 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:57.140 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.140 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.140 20:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:57.397 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:57.397 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:57.397 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:57.397 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:57.397 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:57.397 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:57.397 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:57.397 20:36:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.655 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.655 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.655 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:57.655 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.655 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.655 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:57.655 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.655 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.655 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.655 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:57.655 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.655 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:57.655 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.655 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.655 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:57.655 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.655 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.655 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.655 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.655 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:57.655 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 
00:05:57.655 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:57.655 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:57.655 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:57.912 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:58.170 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:58.170 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:58.170 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:58.170 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:58.170 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:58.170 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.170 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:58.427 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.427 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.427 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:58.427 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.427 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.427 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:58.427 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.427 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.427 20:36:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:58.427 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.427 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.427 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:58.427 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.427 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.427 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:58.427 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.427 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.427 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:58.427 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.427 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.427 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:58.427 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.427 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.427 20:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:58.684 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:58.684 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:58.684 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:58.684 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:58.684 20:36:02 
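The interleaved @16/@17/@18 entries above come from the hot-plug stress workers in target/ns_hotplug_stress.sh: for each namespace id a worker repeatedly attaches a null bdev as that namespace and detaches it again against nqn.2016-06.io.spdk:cnode1, and eight such workers run in parallel, which is why the add and remove entries interleave. A minimal sketch of one worker, reconstructed from the trace (the rpc.py path and the null0..null7 names are taken from the lines above; the exact loop structure in the script may differ):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    n=$1                                   # namespace id this worker owns (1..8 in the trace)
    for (( i = 0; i < 10; ++i )); do       # line 16: the (( ++i )) / (( i < 10 )) entries above
        $rpc nvmf_subsystem_add_ns -n "$n" nqn.2016-06.io.spdk:cnode1 "null$(( n - 1 ))"   # line 17
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$n"                      # line 18
    done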
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:58.684 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.684 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:58.684 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:58.941 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.941 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.941 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.941 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.941 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.941 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.941 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.941 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.941 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.941 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.941 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.941 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.941 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.941 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.941 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:58.941 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:58.941 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:05:58.941 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:05:58.941 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:58.941 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:05:58.941 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:58.941 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:05:58.941 20:36:02 
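At this point the stress loops are done: ns_hotplug_stress.sh@68 drops the SIGINT/SIGTERM/EXIT trap that guarded the stress phase and @70 calls nvmftestfini. The pattern, sketched with an assumed handler (the actual trap body is installed earlier in the script and is not visible in this trace):

    trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT   # assumed: cleanup-on-abort guard set before the loops
    # ... the eight add/remove worker loops traced above run here ...
    trap - SIGINT SIGTERM EXIT                        # line 68: clear the guard after a clean finish
    nvmftestfini                                      # line 70: normal teardown (continues below)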
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:58.941 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:58.941 rmmod nvme_tcp 00:05:58.941 rmmod nvme_fabrics 00:05:58.941 rmmod nvme_keyring 00:05:58.941 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:59.199 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:05:59.199 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:05:59.199 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1547013 ']' 00:05:59.199 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1547013 00:05:59.199 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1547013 ']' 00:05:59.199 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1547013 00:05:59.199 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:05:59.199 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.199 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1547013 00:05:59.199 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:59.199 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:59.199 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1547013' 00:05:59.199 killing process with pid 1547013 00:05:59.199 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1547013 00:05:59.199 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1547013 00:05:59.457 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:59.457 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:59.457 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:59.457 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:05:59.457 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:05:59.457 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:59.457 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:05:59.457 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:59.457 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:59.457 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:59.457 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 
15> /dev/null' 00:05:59.457 20:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:01.358 20:36:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:01.358 00:06:01.358 real 0m48.304s 00:06:01.358 user 3m44.463s 00:06:01.358 sys 0m15.935s 00:06:01.358 20:36:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.358 20:36:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:01.358 ************************************ 00:06:01.358 END TEST nvmf_ns_hotplug_stress 00:06:01.358 ************************************ 00:06:01.358 20:36:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:01.358 20:36:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:01.359 20:36:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.359 20:36:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:01.359 ************************************ 00:06:01.359 START TEST nvmf_delete_subsystem 00:06:01.359 ************************************ 00:06:01.359 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:01.616 * Looking for test storage... 00:06:01.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:01.616 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:01.616 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:06:01.616 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:01.616 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:01.616 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.616 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.616 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.616 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.616 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.616 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.616 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.616 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.616 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.616 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.616 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.616 20:36:05 
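Before the next test gets going, the nvmftestfini teardown traced above boils down to unloading the NVMe-oF kernel modules, stopping the target, and undoing the iptables and namespace plumbing. Condensed from the nvmf/common.sh entries (the _remove_spdk_ns body is not shown in the trace, so that step is an assumption):

    sync                                                   # @121
    modprobe -v -r nvme-tcp                                # @126: rmmod nvme_tcp / nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics                            # @127
    kill 1547013 && wait 1547013                           # @518 killprocess: stop the nvmf_tgt reactor (pid from this run)
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # @791 iptr: strip the SPDK-tagged iptables rules
    ip netns delete cvl_0_0_ns_spdk                        # assumed body of _remove_spdk_ns (@302)
    ip -4 addr flush cvl_0_1                               # @303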
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:01.616 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:01.616 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.616 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:01.616 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:01.616 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:01.616 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.616 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:01.616 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.616 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:01.616 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:01.616 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.616 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:01.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.617 --rc genhtml_branch_coverage=1 00:06:01.617 --rc genhtml_function_coverage=1 00:06:01.617 --rc genhtml_legend=1 00:06:01.617 --rc geninfo_all_blocks=1 00:06:01.617 --rc geninfo_unexecuted_blocks=1 00:06:01.617 00:06:01.617 ' 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:01.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.617 --rc genhtml_branch_coverage=1 00:06:01.617 --rc genhtml_function_coverage=1 00:06:01.617 --rc genhtml_legend=1 00:06:01.617 --rc geninfo_all_blocks=1 00:06:01.617 --rc geninfo_unexecuted_blocks=1 00:06:01.617 00:06:01.617 ' 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:01.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.617 --rc genhtml_branch_coverage=1 00:06:01.617 --rc genhtml_function_coverage=1 00:06:01.617 --rc genhtml_legend=1 00:06:01.617 --rc geninfo_all_blocks=1 00:06:01.617 --rc geninfo_unexecuted_blocks=1 00:06:01.617 00:06:01.617 ' 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:01.617 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.617 --rc genhtml_branch_coverage=1 00:06:01.617 --rc genhtml_function_coverage=1 00:06:01.617 --rc genhtml_legend=1 00:06:01.617 --rc geninfo_all_blocks=1 00:06:01.617 --rc geninfo_unexecuted_blocks=1 00:06:01.617 00:06:01.617 ' 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:01.617 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:01.617 20:36:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:04.149 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:04.149 
20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:04.149 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:04.149 Found net devices under 0000:09:00.0: cvl_0_0 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:04.149 Found net devices under 0000:09:00.1: cvl_0_1 
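The entries above are nvmf/common.sh working out which supported NICs this job can use: with SPDK_TEST_NVMF_NICS=e810 it keeps only the cached e810 PCI functions, then collects the kernel interface names exposed under each function's sysfs node. A rough sketch of that discovery loop, reconstructed from the trace (the driver and operstate checks are omitted):

    pci_devs=("${e810[@]}")                                # @356: only the e810 ports matter for this job
    net_devs=()
    for pci in "${pci_devs[@]}"; do                        # 0000:09:00.0 and 0000:09:00.1, both on the ice driver
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # @411: netdevs exposed by this function
        pci_net_devs=("${pci_net_devs[@]##*/}")            # @427: keep just the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"   # @428: cvl_0_0 and cvl_0_1 here
        net_devs+=("${pci_net_devs[@]}")                   # @429
    done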
00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:04.149 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:04.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:04.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:06:04.150 00:06:04.150 --- 10.0.0.2 ping statistics --- 00:06:04.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:04.150 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:04.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:04.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:06:04.150 00:06:04.150 --- 10.0.0.1 ping statistics --- 00:06:04.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:04.150 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1554697 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1554697 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1554697 ']' 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.150 20:36:07 
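nvmf_tcp_init (the @250–@291 entries above) splits the two e810 ports into a point-to-point TCP test rig: the target port cvl_0_0 is moved into a private network namespace and given 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1, an iptables rule opens port 4420, and both directions are ping-tested. Collected from the trace:

    ip netns add cvl_0_0_ns_spdk                                         # @271
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # @274: target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # @277: initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # @278: target side
    ip link set cvl_0_1 up                                               # @281
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up                 # @283
    ip netns exec cvl_0_0_ns_spdk ip link set lo up                      # @284
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'   # @790
    ping -c 1 10.0.0.2                                                   # @290: 0% loss, ~0.23 ms
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # @291: reverse direction, ~0.11 ms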
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:04.150 [2024-11-26 20:36:07.584450] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:06:04.150 [2024-11-26 20:36:07.584548] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:04.150 [2024-11-26 20:36:07.657705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.150 [2024-11-26 20:36:07.714136] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:04.150 [2024-11-26 20:36:07.714207] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:04.150 [2024-11-26 20:36:07.714230] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:04.150 [2024-11-26 20:36:07.714241] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:04.150 [2024-11-26 20:36:07.714251] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:04.150 [2024-11-26 20:36:07.715758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.150 [2024-11-26 20:36:07.715763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:04.150 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:04.408 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:04.408 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:04.408 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.408 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:04.408 [2024-11-26 20:36:07.864438] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:04.408 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.408 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:04.408 20:36:07 
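With the network plumbed, nvmfappstart launches the target inside the namespace and delete_subsystem.sh@15–@16 create the transport and the subsystem under test. Roughly, as traced above (rpc.py abbreviates the full scripts/rpc.py path shown in the trace; backgrounding with & and $! is the usual shape of nvmfappstart and is assumed here):

    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x3 &                                    # @508: two reactors, cores 0 and 1
    nvmfpid=$!                                                     # @509: 1554697 in this run
    waitforlisten "$nvmfpid"                                       # @510: wait for /var/tmp/spdk.sock to come up
    rpc.py nvmf_create_transport -t tcp -o -u 8192                 # delete_subsystem.sh@15: TCP transport init
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                             # @16: the subsystem that will be deleted under load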
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.408 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:04.408 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.408 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:04.408 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.408 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:04.408 [2024-11-26 20:36:07.880667] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:04.408 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.408 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:04.408 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.408 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:04.408 NULL1 00:06:04.408 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.408 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:04.408 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.408 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:04.408 Delay0 00:06:04.408 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.408 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.408 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.408 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:04.408 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.408 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1554991 00:06:04.408 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:04.408 20:36:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:04.408 [2024-11-26 20:36:07.965544] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
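The remaining setup traced above (delete_subsystem.sh@17–@30) is what makes the deletion interesting: a listener is added on 10.0.0.2:4420, a null bdev is wrapped in a delay bdev with one-second latencies, that Delay0 bdev is attached as a namespace, and spdk_nvme_perf is started against it for five seconds. Two seconds in, @32 deletes the whole subsystem while I/O is still queued behind the delay bdev, so the flood of "completed with error (sct=0, sc=8) ... starting I/O failed: -6" lines that follows is the expected outcome of this test, not a failure. Condensed from the trace (rpc.py again abbreviates the full scripts/rpc.py path):

    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # @17
    rpc.py bdev_null_create NULL1 1000 512                                                      # @18: size 1000, 512-byte blocks
    rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000     # @23: ~1 s per I/O
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0                              # @24
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &                                               # @26/@28: perf_pid=1554991
    sleep 2                                                                                     # @30
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1                                     # @32: delete under active I/O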
00:06:06.302 20:36:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:06.302 20:36:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.302 20:36:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:06.867 Read completed with error (sct=0, sc=8) 00:06:06.867 Read completed with error (sct=0, sc=8) 00:06:06.867 starting I/O failed: -6 00:06:06.867 Read completed with error (sct=0, sc=8) 00:06:06.867 Read completed with error (sct=0, sc=8) 00:06:06.867 Read completed with error (sct=0, sc=8) 00:06:06.867 Read completed with error (sct=0, sc=8) 00:06:06.867 starting I/O failed: -6 00:06:06.867 Read completed with error (sct=0, sc=8) 00:06:06.867 Read completed with error (sct=0, sc=8) 00:06:06.867 Write completed with error (sct=0, sc=8) 00:06:06.867 Write completed with error (sct=0, sc=8) 00:06:06.867 starting I/O failed: -6 00:06:06.867 Write completed with error (sct=0, sc=8) 00:06:06.867 Read completed with error (sct=0, sc=8) 00:06:06.867 Read completed with error (sct=0, sc=8) 00:06:06.867 Read completed with error (sct=0, sc=8) 00:06:06.867 starting I/O failed: -6 00:06:06.867 Read completed with error (sct=0, sc=8) 00:06:06.867 Read completed with error (sct=0, sc=8) 00:06:06.867 Write completed with error (sct=0, sc=8) 00:06:06.867 Read completed with error (sct=0, sc=8) 00:06:06.867 starting I/O failed: -6 00:06:06.867 Write completed with error (sct=0, sc=8) 00:06:06.867 Write completed with error (sct=0, sc=8) 00:06:06.867 Write completed with error (sct=0, sc=8) 00:06:06.867 Write completed with error (sct=0, sc=8) 00:06:06.867 starting I/O failed: -6 00:06:06.867 Write completed with error (sct=0, sc=8) 00:06:06.867 Read completed with error (sct=0, sc=8) 00:06:06.867 Read completed with error (sct=0, sc=8) 00:06:06.867 Read completed with error (sct=0, sc=8) 00:06:06.867 starting I/O failed: -6 00:06:06.867 Write completed with error (sct=0, sc=8) 00:06:06.867 Write completed with error (sct=0, sc=8) 00:06:06.867 Read completed with error (sct=0, sc=8) 00:06:06.867 Read completed with error (sct=0, sc=8) 00:06:06.867 starting I/O failed: -6 00:06:06.867 Write completed with error (sct=0, sc=8) 00:06:06.867 Write completed with error (sct=0, sc=8) 00:06:06.867 Write completed with error (sct=0, sc=8) 00:06:06.867 Read completed with error (sct=0, sc=8) 00:06:06.867 starting I/O failed: -6 00:06:06.867 Read completed with error (sct=0, sc=8) 00:06:06.867 Read completed with error (sct=0, sc=8) 00:06:06.867 Read completed with error (sct=0, sc=8) 00:06:06.867 Write completed with error (sct=0, sc=8) 00:06:06.867 starting I/O failed: -6 00:06:06.867 Read completed with error (sct=0, sc=8) 00:06:06.867 starting I/O failed: -6 00:06:06.867 [2024-11-26 20:36:10.257816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fec0c00d020 is same with the state(6) to be set 00:06:06.867 Read completed with error (sct=0, sc=8) 00:06:06.867 Write completed with error (sct=0, sc=8) 00:06:06.867 Read completed with error (sct=0, sc=8) 00:06:06.867 Read completed with error (sct=0, sc=8) 00:06:06.867 starting I/O failed: -6 00:06:06.867 Write completed with error (sct=0, sc=8) 00:06:06.867 Read completed with error (sct=0, sc=8) 00:06:06.867 Read completed with error (sct=0, sc=8) 00:06:06.867 Write completed with error (sct=0, sc=8) 00:06:06.867 Read 
completed with error (sct=0, sc=8) 00:06:06.867 Read completed with error (sct=0, sc=8) 00:06:06.867 [repeated Read/Write "completed with error (sct=0, sc=8)" completions interleaved with "starting I/O failed: -6" markers, 00:06:06.867 through 00:06:07.801] 00:06:07.800 [2024-11-26 20:36:11.224793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152b9b0 is same with the state(6) to be set 00:06:07.800 [2024-11-26 20:36:11.258043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fec0c00d350 is same with the state(6) to be set 00:06:07.800 [2024-11-26 20:36:11.260389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152a4a0 is same with the state(6) to be set 00:06:07.801 [2024-11-26 20:36:11.260651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152a2c0 is same with the state(6) to be set 00:06:07.801 [2024-11-26 20:36:11.260886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152a860 is same with the state(6) to be set 00:06:07.801 Initializing NVMe Controllers 00:06:07.801 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:07.801 Controller IO queue size 128, less than required. 00:06:07.801 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:07.801 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:07.801 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:07.801 Initialization complete. Launching workers.
00:06:07.801 ======================================================== 00:06:07.801 Latency(us) 00:06:07.801 Device Information : IOPS MiB/s Average min max 00:06:07.801 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 194.43 0.09 946421.95 924.81 1012595.18 00:06:07.801 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 152.77 0.07 886320.06 600.99 1013541.30 00:06:07.801 ======================================================== 00:06:07.801 Total : 347.20 0.17 919977.12 600.99 1013541.30 00:06:07.801 00:06:07.801 [2024-11-26 20:36:11.261878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x152b9b0 (9): Bad file descriptor 00:06:07.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:07.801 20:36:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.801 20:36:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:07.801 20:36:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1554991 00:06:07.801 20:36:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:08.367 20:36:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:08.367 20:36:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1554991 00:06:08.367 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1554991) - No such process 00:06:08.367 20:36:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1554991 00:06:08.367 20:36:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:08.367 20:36:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1554991 00:06:08.367 20:36:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:08.367 20:36:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.367 20:36:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:08.367 20:36:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.367 20:36:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1554991 00:06:08.367 20:36:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:08.367 20:36:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:08.367 20:36:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:08.367 20:36:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:08.367 20:36:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:08.367 20:36:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.367 20:36:11 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:08.367 20:36:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.367 20:36:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:08.367 20:36:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.367 20:36:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:08.367 [2024-11-26 20:36:11.785655] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:08.367 20:36:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.367 20:36:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.367 20:36:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.367 20:36:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:08.367 20:36:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.367 20:36:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1555587 00:06:08.367 20:36:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:08.367 20:36:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:08.367 20:36:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1555587 00:06:08.367 20:36:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:08.367 [2024-11-26 20:36:11.859289] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
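What the trace above is exercising is a background spdk_nvme_perf run against the re-created subsystem plus a poll-until-exit loop on its PID; the iterations of that loop are what print next. A minimal bash sketch of that pattern is below. The perf arguments, the retry budget of 20, and the 0.5 s sleep are taken from the trace; the variable names and the error message are illustrative only, not the exact delete_subsystem.sh code.

```bash
#!/usr/bin/env bash
# Hedged sketch of the poll-until-exit pattern visible in the xtrace above.
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf

# Start the 3-second randrw workload in the background (arguments from the trace).
"$PERF" -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

delay=0
# kill -0 only checks that the PID still exists; it delivers no signal.
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && { echo "perf did not finish in time" >&2; break; }
    sleep 0.5
done

# Reap the background job once the PID is gone (mirrors the later `wait` in the trace).
wait "$perf_pid" 2>/dev/null || true
```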
00:06:08.624 20:36:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:08.624 20:36:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1555587 00:06:08.624 20:36:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:09.189 20:36:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:09.189 20:36:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1555587 00:06:09.189 20:36:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:09.753 20:36:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:09.753 20:36:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1555587 00:06:09.753 20:36:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:10.318 20:36:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:10.318 20:36:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1555587 00:06:10.318 20:36:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:10.883 20:36:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:10.883 20:36:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1555587 00:06:10.883 20:36:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:11.141 20:36:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:11.141 20:36:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1555587 00:06:11.141 20:36:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:11.399 Initializing NVMe Controllers 00:06:11.399 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:11.399 Controller IO queue size 128, less than required. 00:06:11.399 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:11.399 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:11.399 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:11.399 Initialization complete. Launching workers. 
00:06:11.399 ======================================================== 00:06:11.399 Latency(us) 00:06:11.399 Device Information : IOPS MiB/s Average min max 00:06:11.399 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003395.18 1000152.36 1041127.45 00:06:11.399 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004210.75 1000166.89 1010858.64 00:06:11.399 ======================================================== 00:06:11.399 Total : 256.00 0.12 1003802.96 1000152.36 1041127.45 00:06:11.399 00:06:11.656 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:11.656 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1555587 00:06:11.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1555587) - No such process 00:06:11.656 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1555587 00:06:11.656 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:11.656 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:11.656 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:11.656 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:11.656 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:11.656 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:11.656 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:11.656 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:11.656 rmmod nvme_tcp 00:06:11.656 rmmod nvme_fabrics 00:06:11.914 rmmod nvme_keyring 00:06:11.914 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:11.914 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:11.914 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:11.914 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1554697 ']' 00:06:11.914 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1554697 00:06:11.914 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1554697 ']' 00:06:11.914 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1554697 00:06:11.914 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:11.914 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.914 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1554697 00:06:11.914 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.914 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:06:11.914 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1554697' 00:06:11.914 killing process with pid 1554697 00:06:11.914 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1554697 00:06:11.914 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1554697 00:06:12.173 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:12.173 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:12.173 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:12.173 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:12.173 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:12.173 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:12.173 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:12.173 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:12.173 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:12.173 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:12.173 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:12.173 20:36:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:14.079 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:14.079 00:06:14.079 real 0m12.682s 00:06:14.079 user 0m28.452s 00:06:14.079 sys 0m3.098s 00:06:14.079 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.079 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:14.079 ************************************ 00:06:14.079 END TEST nvmf_delete_subsystem 00:06:14.079 ************************************ 00:06:14.079 20:36:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:14.079 20:36:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:14.079 20:36:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.079 20:36:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:14.079 ************************************ 00:06:14.079 START TEST nvmf_host_management 00:06:14.079 ************************************ 00:06:14.079 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:14.338 * Looking for test storage... 
00:06:14.338 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:14.338 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:14.338 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:14.338 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:14.338 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:14.338 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.338 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.338 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.338 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.338 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.338 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.338 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.338 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.338 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.338 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.338 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.338 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:14.338 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:14.338 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.338 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:14.338 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:14.338 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:14.338 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:14.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.339 --rc genhtml_branch_coverage=1 00:06:14.339 --rc genhtml_function_coverage=1 00:06:14.339 --rc genhtml_legend=1 00:06:14.339 --rc geninfo_all_blocks=1 00:06:14.339 --rc geninfo_unexecuted_blocks=1 00:06:14.339 00:06:14.339 ' 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:14.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.339 --rc genhtml_branch_coverage=1 00:06:14.339 --rc genhtml_function_coverage=1 00:06:14.339 --rc genhtml_legend=1 00:06:14.339 --rc geninfo_all_blocks=1 00:06:14.339 --rc geninfo_unexecuted_blocks=1 00:06:14.339 00:06:14.339 ' 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:14.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.339 --rc genhtml_branch_coverage=1 00:06:14.339 --rc genhtml_function_coverage=1 00:06:14.339 --rc genhtml_legend=1 00:06:14.339 --rc geninfo_all_blocks=1 00:06:14.339 --rc geninfo_unexecuted_blocks=1 00:06:14.339 00:06:14.339 ' 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:14.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.339 --rc genhtml_branch_coverage=1 00:06:14.339 --rc genhtml_function_coverage=1 00:06:14.339 --rc genhtml_legend=1 00:06:14.339 --rc geninfo_all_blocks=1 00:06:14.339 --rc geninfo_unexecuted_blocks=1 00:06:14.339 00:06:14.339 ' 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:14.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:14.339 20:36:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:16.895 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:16.895 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:16.895 Found net devices under 0000:09:00.0: cvl_0_0 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:16.895 20:36:20 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:16.895 Found net devices under 0000:09:00.1: cvl_0_1 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:16.895 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:16.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:16.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:06:16.896 00:06:16.896 --- 10.0.0.2 ping statistics --- 00:06:16.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:16.896 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:16.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:16.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:06:16.896 00:06:16.896 --- 10.0.0.1 ping statistics --- 00:06:16.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:16.896 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1557959 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1557959 00:06:16.896 20:36:20 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1557959 ']' 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:16.896 [2024-11-26 20:36:20.288170] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:06:16.896 [2024-11-26 20:36:20.288245] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:16.896 [2024-11-26 20:36:20.363812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:16.896 [2024-11-26 20:36:20.421362] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:16.896 [2024-11-26 20:36:20.421430] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:16.896 [2024-11-26 20:36:20.421455] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:16.896 [2024-11-26 20:36:20.421466] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:16.896 [2024-11-26 20:36:20.421477] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
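The trace above is the test-net plumbing from nvmf_tcp_init plus the target launch: one of the two cvl_0_* net devices found earlier is moved into a private network namespace, the 10.0.0.1/10.0.0.2 address pair is assigned, TCP port 4420 is opened in iptables, reachability is checked with ping in both directions, and nvmf_tgt is started inside the namespace on core mask 0x1E while the script blocks until its RPC socket answers. A minimal standalone sketch of that sequence, assuming a stock SPDK checkout; the rpc.py helper and the polling loop are assumptions, while the interface names, addresses and flags mirror the trace:

NS=cvl_0_0_ns_spdk
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# target-side port goes into the namespace, initiator-side port stays in the root ns
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# let NVMe/TCP traffic reach the initiator-side port and verify both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# start the target on cores 1-4 (mask 0x1E) and wait until it serves RPCs
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock framework_wait_init 2>/dev/null; do
    kill -0 "$nvmfpid" || exit 1    # bail out if the target died during startup
    sleep 0.5
done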
00:06:16.896 [2024-11-26 20:36:20.423132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.896 [2024-11-26 20:36:20.423237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:16.896 [2024-11-26 20:36:20.423328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:16.896 [2024-11-26 20:36:20.423333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:16.896 [2024-11-26 20:36:20.576755] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:16.896 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:17.154 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:17.154 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:17.154 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.154 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:17.154 Malloc0 00:06:17.154 [2024-11-26 20:36:20.659371] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:17.154 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.154 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:17.154 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:17.154 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:17.154 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=1558117 00:06:17.154 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1558117 /var/tmp/bdevperf.sock 00:06:17.154 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1558117 ']' 00:06:17.154 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:17.154 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:17.154 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:17.154 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.154 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:17.154 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:17.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:17.154 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.154 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:17.154 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:17.154 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:17.154 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:17.154 { 00:06:17.154 "params": { 00:06:17.154 "name": "Nvme$subsystem", 00:06:17.154 "trtype": "$TEST_TRANSPORT", 00:06:17.154 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:17.154 "adrfam": "ipv4", 00:06:17.154 "trsvcid": "$NVMF_PORT", 00:06:17.154 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:17.154 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:17.154 "hdgst": ${hdgst:-false}, 00:06:17.154 "ddgst": ${ddgst:-false} 00:06:17.154 }, 00:06:17.154 "method": "bdev_nvme_attach_controller" 00:06:17.154 } 00:06:17.154 EOF 00:06:17.154 )") 00:06:17.154 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:17.154 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:17.154 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:17.154 20:36:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:17.154 "params": { 00:06:17.154 "name": "Nvme0", 00:06:17.154 "trtype": "tcp", 00:06:17.154 "traddr": "10.0.0.2", 00:06:17.154 "adrfam": "ipv4", 00:06:17.154 "trsvcid": "4420", 00:06:17.154 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:17.154 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:17.154 "hdgst": false, 00:06:17.154 "ddgst": false 00:06:17.154 }, 00:06:17.154 "method": "bdev_nvme_attach_controller" 00:06:17.154 }' 00:06:17.154 [2024-11-26 20:36:20.742041] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:06:17.154 [2024-11-26 20:36:20.742107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1558117 ] 00:06:17.154 [2024-11-26 20:36:20.812679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.411 [2024-11-26 20:36:20.873810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.411 Running I/O for 10 seconds... 00:06:17.668 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.668 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:17.668 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:17.668 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.668 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:17.668 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.668 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:17.668 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:17.668 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:17.668 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:17.668 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:17.668 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:17.668 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:17.668 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:17.668 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:17.668 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:17.668 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.668 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:17.668 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.668 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:06:17.668 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:06:17.668 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:17.927 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:17.927 
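At this point bdevperf has been launched against the target with a controller config generated by gen_nvmf_target_json and read from /dev/fd/63, and waitforio polls the bdev statistics until at least 100 reads have completed, so that the fault below is injected under real load. The polling step, written out as a standalone loop; the rpc.py helper and the ten-try limit are assumptions, while the RPC name, socket path, bdev name and threshold are taken from the trace ($SPDK as in the sketch above):

sock=/var/tmp/bdevperf.sock
ret=1
for _ in $(seq 1 10); do
    ops=$("$SPDK/scripts/rpc.py" -s "$sock" bdev_get_iostat -b Nvme0n1 \
          | jq -r '.bdevs[0].num_read_ops')
    if [ "$ops" -ge 100 ]; then
        ret=0        # enough verify I/O observed, safe to start pulling the host out
        break
    fi
    sleep 0.25
done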
20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:17.927 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:17.927 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:17.927 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.927 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:17.927 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.927 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:06:17.927 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:06:17.927 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:17.927 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:17.927 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:17.927 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:17.927 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.927 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:17.927 [2024-11-26 20:36:21.502104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.927 [2024-11-26 20:36:21.502182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.927 [2024-11-26 20:36:21.502197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.927 [2024-11-26 20:36:21.502210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.927 [2024-11-26 20:36:21.502222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.927 [2024-11-26 20:36:21.502234] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.927 [2024-11-26 20:36:21.502247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.927 [2024-11-26 20:36:21.502259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.927 [2024-11-26 20:36:21.502276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.927 [2024-11-26 20:36:21.502289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.927 [2024-11-26 20:36:21.502324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is 
same with the state(6) to be set 00:06:17.927 [2024-11-26 20:36:21.502343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.927 [2024-11-26 20:36:21.502355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.927 [2024-11-26 20:36:21.502367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.927 [2024-11-26 20:36:21.502379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.927 [2024-11-26 20:36:21.502391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.927 [2024-11-26 20:36:21.502406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.927 [2024-11-26 20:36:21.502418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.927 [2024-11-26 20:36:21.502429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.927 [2024-11-26 20:36:21.502442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.927 [2024-11-26 20:36:21.502454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.927 [2024-11-26 20:36:21.502465] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.927 [2024-11-26 20:36:21.502477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.927 [2024-11-26 20:36:21.502490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.927 [2024-11-26 20:36:21.502502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.927 [2024-11-26 20:36:21.502514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.927 [2024-11-26 20:36:21.502526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.927 [2024-11-26 20:36:21.502538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.928 [2024-11-26 20:36:21.502550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.928 [2024-11-26 20:36:21.502561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.928 [2024-11-26 20:36:21.502574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.928 [2024-11-26 20:36:21.502586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.928 [2024-11-26 20:36:21.502598] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.928 [2024-11-26 20:36:21.502609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.928 [2024-11-26 20:36:21.502622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.928 [2024-11-26 20:36:21.502634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.928 [2024-11-26 20:36:21.502645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.928 [2024-11-26 20:36:21.502671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.928 [2024-11-26 20:36:21.502684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.928 [2024-11-26 20:36:21.502696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.928 [2024-11-26 20:36:21.502708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.928 [2024-11-26 20:36:21.502720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1912f10 is same with the state(6) to be set 00:06:17.928 [2024-11-26 20:36:21.504015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.928 [2024-11-26 20:36:21.504053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.928 [2024-11-26 20:36:21.504080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.928 [2024-11-26 20:36:21.504096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.928 [2024-11-26 20:36:21.504111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.928 [2024-11-26 20:36:21.504125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.928 [2024-11-26 20:36:21.504150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.928 [2024-11-26 20:36:21.504163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.928 [2024-11-26 20:36:21.504177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.928 [2024-11-26 20:36:21.504190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.928 [2024-11-26 20:36:21.504205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:06:17.928 [2024-11-26 20:36:21.504218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.928 [2024-11-26 20:36:21.504233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.928 [2024-11-26 20:36:21.504246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.928 [2024-11-26 20:36:21.504260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.928 [2024-11-26 20:36:21.504273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.928 [2024-11-26 20:36:21.504312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.928 [2024-11-26 20:36:21.504329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.928 [2024-11-26 20:36:21.504344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.928 [2024-11-26 20:36:21.504362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.928 [2024-11-26 20:36:21.504377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.928 [2024-11-26 20:36:21.504397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.928 [2024-11-26 20:36:21.504414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.928 [2024-11-26 20:36:21.504428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.928 [2024-11-26 20:36:21.504443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.928 [2024-11-26 20:36:21.504457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.928 [2024-11-26 20:36:21.504471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.928 [2024-11-26 20:36:21.504485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.928 [2024-11-26 20:36:21.504500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.928 [2024-11-26 20:36:21.504514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.928 [2024-11-26 20:36:21.504529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.928 
[2024-11-26 20:36:21.504542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.928 [2024-11-26 20:36:21.504557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.928 [2024-11-26 20:36:21.504572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.928 [2024-11-26 20:36:21.504586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.928 [2024-11-26 20:36:21.504600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.928 [2024-11-26 20:36:21.504619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.928 [2024-11-26 20:36:21.504633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.928 [2024-11-26 20:36:21.504663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.928 [2024-11-26 20:36:21.504677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.928 [2024-11-26 20:36:21.504691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.928 [2024-11-26 20:36:21.504704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.928 [2024-11-26 20:36:21.504719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.928 [2024-11-26 20:36:21.504733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.928 [2024-11-26 20:36:21.504747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.928 [2024-11-26 20:36:21.504761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.928 [2024-11-26 20:36:21.504779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.928 [2024-11-26 20:36:21.504793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.928 [2024-11-26 20:36:21.504807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.928 [2024-11-26 20:36:21.504821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.928 [2024-11-26 20:36:21.504836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.928 [2024-11-26 
20:36:21.504849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.928 [2024-11-26 20:36:21.504863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.928 [2024-11-26 20:36:21.504877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.928 [2024-11-26 20:36:21.504892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.928 [2024-11-26 20:36:21.504906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.928 [2024-11-26 20:36:21.504920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.928 [2024-11-26 20:36:21.504933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.928 [2024-11-26 20:36:21.504947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.928 [2024-11-26 20:36:21.504960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.928 [2024-11-26 20:36:21.504975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.928 [2024-11-26 20:36:21.504988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.928 [2024-11-26 20:36:21.505002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.928 [2024-11-26 20:36:21.505015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.928 [2024-11-26 20:36:21.505029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.928 [2024-11-26 20:36:21.505043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.929 [2024-11-26 20:36:21.505058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.929 [2024-11-26 20:36:21.505071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.929 [2024-11-26 20:36:21.505085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.929 [2024-11-26 20:36:21.505098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.929 [2024-11-26 20:36:21.505113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.929 [2024-11-26 
20:36:21.505130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.929 [2024-11-26 20:36:21.505146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.929 [2024-11-26 20:36:21.505160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.929 [2024-11-26 20:36:21.505175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.929 [2024-11-26 20:36:21.505188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.929 [2024-11-26 20:36:21.505203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.929 [2024-11-26 20:36:21.505216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.929 [2024-11-26 20:36:21.505230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.929 [2024-11-26 20:36:21.505244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.929 [2024-11-26 20:36:21.505258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.929 [2024-11-26 20:36:21.505271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.929 [2024-11-26 20:36:21.505310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.929 [2024-11-26 20:36:21.505327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.929 [2024-11-26 20:36:21.505342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.929 [2024-11-26 20:36:21.505359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.929 [2024-11-26 20:36:21.505374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.929 [2024-11-26 20:36:21.505387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.929 [2024-11-26 20:36:21.505402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.929 [2024-11-26 20:36:21.505415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.929 [2024-11-26 20:36:21.505431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.929 [2024-11-26 
20:36:21.505444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.929 [2024-11-26 20:36:21.505459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.929 [2024-11-26 20:36:21.505472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.929 [2024-11-26 20:36:21.505488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.929 [2024-11-26 20:36:21.505501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.929 [2024-11-26 20:36:21.505520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.929 [2024-11-26 20:36:21.505536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.929 [2024-11-26 20:36:21.505551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.929 [2024-11-26 20:36:21.505564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.929 [2024-11-26 20:36:21.505579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.929 [2024-11-26 20:36:21.505593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.929 [2024-11-26 20:36:21.505634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.929 [2024-11-26 20:36:21.505647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.929 [2024-11-26 20:36:21.505661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.929 [2024-11-26 20:36:21.505676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.929 [2024-11-26 20:36:21.505690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.929 [2024-11-26 20:36:21.505703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.929 [2024-11-26 20:36:21.505719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.929 [2024-11-26 20:36:21.505732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.929 [2024-11-26 20:36:21.505746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.929 [2024-11-26 
20:36:21.505759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.929 [2024-11-26 20:36:21.505774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.929 [2024-11-26 20:36:21.505787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.929 [2024-11-26 20:36:21.505802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.929 [2024-11-26 20:36:21.505815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.929 [2024-11-26 20:36:21.505829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.929 [2024-11-26 20:36:21.505843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.929 [2024-11-26 20:36:21.505858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.929 [2024-11-26 20:36:21.505879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.929 [2024-11-26 20:36:21.505909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.929 [2024-11-26 20:36:21.505927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.929 [2024-11-26 20:36:21.505943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.929 [2024-11-26 20:36:21.505957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.929 [2024-11-26 20:36:21.505971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.929 [2024-11-26 20:36:21.505985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.929 [2024-11-26 20:36:21.506000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:17.929 [2024-11-26 20:36:21.506014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:17.929 [2024-11-26 20:36:21.506050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:06:17.929 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.929 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:17.929 20:36:21 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.929 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:17.929 [2024-11-26 20:36:21.507247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:17.929 task offset: 81920 on job bdev=Nvme0n1 fails 00:06:17.929 00:06:17.929 Latency(us) 00:06:17.929 [2024-11-26T19:36:21.626Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:17.929 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:17.929 Job: Nvme0n1 ended in about 0.40 seconds with error 00:06:17.929 Verification LBA range: start 0x0 length 0x400 00:06:17.929 Nvme0n1 : 0.40 1601.78 100.11 160.18 0.00 35245.34 2694.26 34952.53 00:06:17.929 [2024-11-26T19:36:21.626Z] =================================================================================================================== 00:06:17.929 [2024-11-26T19:36:21.626Z] Total : 1601.78 100.11 160.18 0.00 35245.34 2694.26 34952.53 00:06:17.929 [2024-11-26 20:36:21.509163] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:17.929 [2024-11-26 20:36:21.509191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1812a50 (9): Bad file descriptor 00:06:17.929 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.929 20:36:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:17.929 [2024-11-26 20:36:21.560682] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
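That failure path is the point of the host_management test: with verify I/O in flight, the host's access to the subsystem is revoked, every queued write completes as ABORTED - SQ DELETION, the bdevperf job is marked failed after about 0.40 s, and the host entry is added back so the subsequent controller reset succeeds. Stripped of the surrounding trace, the fault injection is just two RPCs against the target; the rpc.py helper is an assumption, the RPC names and NQNs match the trace:

# revoke the initiator's access while bdevperf still has I/O queued ...
"$SPDK/scripts/rpc.py" nvmf_subsystem_remove_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# ... the in-flight queue drains with ABORTED completions, then re-grant access
# so the host-side reset can reconnect
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0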
00:06:18.862 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1558117 00:06:18.862 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1558117) - No such process 00:06:18.862 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:18.862 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:18.862 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:18.862 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:18.862 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:18.862 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:18.862 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:18.862 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:18.862 { 00:06:18.862 "params": { 00:06:18.862 "name": "Nvme$subsystem", 00:06:18.862 "trtype": "$TEST_TRANSPORT", 00:06:18.862 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:18.862 "adrfam": "ipv4", 00:06:18.862 "trsvcid": "$NVMF_PORT", 00:06:18.862 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:18.862 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:18.862 "hdgst": ${hdgst:-false}, 00:06:18.862 "ddgst": ${ddgst:-false} 00:06:18.862 }, 00:06:18.862 "method": "bdev_nvme_attach_controller" 00:06:18.862 } 00:06:18.862 EOF 00:06:18.862 )") 00:06:18.862 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:18.862 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:18.862 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:18.862 20:36:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:18.862 "params": { 00:06:18.862 "name": "Nvme0", 00:06:18.862 "trtype": "tcp", 00:06:18.862 "traddr": "10.0.0.2", 00:06:18.862 "adrfam": "ipv4", 00:06:18.862 "trsvcid": "4420", 00:06:18.862 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:18.862 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:18.862 "hdgst": false, 00:06:18.862 "ddgst": false 00:06:18.862 }, 00:06:18.862 "method": "bdev_nvme_attach_controller" 00:06:18.862 }' 00:06:19.119 [2024-11-26 20:36:22.569997] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:06:19.119 [2024-11-26 20:36:22.570090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1558299 ] 00:06:19.120 [2024-11-26 20:36:22.642891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.120 [2024-11-26 20:36:22.703975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.685 Running I/O for 1 seconds... 
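With the failed run killed off, a second one-second bdevperf pass confirms the data path is healthy again after the reset. The controller JSON printed above reaches bdevperf without a temporary file by being read from a file descriptor; the shape of that invocation, with the config produced by the test's gen_nvmf_target_json helper (treating the helper as callable here is an assumption, the flags match the trace):

# the process substitution shows up as /dev/fd/62 (or /dev/fd/63) in the traced command line
"$SPDK/build/examples/bdevperf" --json <(gen_nvmf_target_json 0) \
    -q 64 -o 65536 -w verify -t 1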
00:06:20.617 1664.00 IOPS, 104.00 MiB/s 00:06:20.617 Latency(us) 00:06:20.617 [2024-11-26T19:36:24.314Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:20.617 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:20.617 Verification LBA range: start 0x0 length 0x400 00:06:20.618 Nvme0n1 : 1.01 1709.56 106.85 0.00 0.00 36819.44 5000.15 33399.09 00:06:20.618 [2024-11-26T19:36:24.315Z] =================================================================================================================== 00:06:20.618 [2024-11-26T19:36:24.315Z] Total : 1709.56 106.85 0.00 0.00 36819.44 5000.15 33399.09 00:06:20.875 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:20.875 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:20.875 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:20.875 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:20.875 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:20.875 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:20.875 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:20.875 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:20.875 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:20.875 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:20.875 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:20.875 rmmod nvme_tcp 00:06:20.875 rmmod nvme_fabrics 00:06:20.875 rmmod nvme_keyring 00:06:20.875 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:20.875 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:20.875 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:20.875 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1557959 ']' 00:06:20.875 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1557959 00:06:20.875 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1557959 ']' 00:06:20.875 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1557959 00:06:20.875 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:20.875 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.875 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1557959 00:06:20.875 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:20.875 20:36:24 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:20.875 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1557959' 00:06:20.875 killing process with pid 1557959 00:06:20.875 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1557959 00:06:20.875 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1557959 00:06:21.134 [2024-11-26 20:36:24.671638] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:21.134 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:21.134 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:21.134 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:21.134 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:21.134 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:21.134 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:21.134 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:21.134 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:21.134 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:21.134 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:21.134 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:21.134 20:36:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:23.676 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:23.676 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:23.676 00:06:23.676 real 0m9.005s 00:06:23.676 user 0m20.193s 00:06:23.676 sys 0m2.880s 00:06:23.676 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.676 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:23.676 ************************************ 00:06:23.676 END TEST nvmf_host_management 00:06:23.676 ************************************ 00:06:23.676 20:36:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:23.676 20:36:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:23.676 20:36:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.676 20:36:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:23.676 ************************************ 00:06:23.676 START TEST nvmf_lvol 00:06:23.676 ************************************ 00:06:23.676 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:23.676 * Looking for test storage... 00:06:23.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:23.676 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:23.676 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:06:23.676 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:23.676 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:23.676 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.676 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:23.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.677 --rc genhtml_branch_coverage=1 00:06:23.677 --rc genhtml_function_coverage=1 00:06:23.677 --rc genhtml_legend=1 00:06:23.677 --rc geninfo_all_blocks=1 00:06:23.677 --rc geninfo_unexecuted_blocks=1 00:06:23.677 00:06:23.677 ' 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:23.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.677 --rc genhtml_branch_coverage=1 00:06:23.677 --rc genhtml_function_coverage=1 00:06:23.677 --rc genhtml_legend=1 00:06:23.677 --rc geninfo_all_blocks=1 00:06:23.677 --rc geninfo_unexecuted_blocks=1 00:06:23.677 00:06:23.677 ' 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:23.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.677 --rc genhtml_branch_coverage=1 00:06:23.677 --rc genhtml_function_coverage=1 00:06:23.677 --rc genhtml_legend=1 00:06:23.677 --rc geninfo_all_blocks=1 00:06:23.677 --rc geninfo_unexecuted_blocks=1 00:06:23.677 00:06:23.677 ' 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:23.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.677 --rc genhtml_branch_coverage=1 00:06:23.677 --rc genhtml_function_coverage=1 00:06:23.677 --rc genhtml_legend=1 00:06:23.677 --rc geninfo_all_blocks=1 00:06:23.677 --rc geninfo_unexecuted_blocks=1 00:06:23.677 00:06:23.677 ' 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
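The cmp_versions walk a few entries up is the harness deciding which coverage flags to export: lt 1.15 2 splits both version strings on '.', '-' and ':' and compares the fields numerically, and because the reported lcov version is below 2 the pre-2.x lcov_-prefixed --rc option names are the ones that end up in LCOV_OPTS. A condensed sketch of that comparison, assuming the same field-splitting convention (the lt_sketch name and the treat-missing-fields-as-zero shortcut are illustrative, not the exact code in scripts/common.sh):

lt_sketch() {
  local IFS=.-:
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]} v
  for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first lower field decides: older version
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
  done
  return 1                                            # equal versions are not "less than"
}
lt_sketch 1.15 2 && echo 'lcov is older than 2.x, keep the legacy --rc lcov_* option names'
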
00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:23.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:23.677 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:23.678 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:23.678 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:23.678 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:23.678 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:23.678 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:23.678 20:36:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:25.577 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:25.577 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:25.577 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:25.577 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:25.577 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:25.577 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:25.578 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:25.578 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:25.578 20:36:29 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:25.578 Found net devices under 0000:09:00.0: cvl_0_0 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:25.578 Found net devices under 0000:09:00.1: cvl_0_1 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:25.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:25.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:06:25.578 00:06:25.578 --- 10.0.0.2 ping statistics --- 00:06:25.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:25.578 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:25.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:25.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:06:25.578 00:06:25.578 --- 10.0.0.1 ping statistics --- 00:06:25.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:25.578 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:25.578 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:25.579 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:25.579 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1560521 00:06:25.579 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:25.579 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1560521 00:06:25.579 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1560521 ']' 00:06:25.579 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.579 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.579 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.579 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.579 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:25.836 [2024-11-26 20:36:29.320408] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
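Both one-packet pings succeed, confirming the topology nvmf_tcp_init just assembled: the target-side port cvl_0_0 is isolated in the cvl_0_0_ns_spdk namespace with 10.0.0.2, the initiator-side port cvl_0_1 keeps 10.0.0.1 in the root namespace, an iptables rule admits TCP/4420 from the initiator interface, and the nvmf_tgt instance that follows is launched inside the namespace via ip netns exec. Condensed from the trace, the setup amounts to the commands below (the address flushes and the SPDK_NVMF comment tag on the iptables rule are left out of this sketch):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into its own namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target namespace -> root namespace
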
00:06:25.836 [2024-11-26 20:36:29.320504] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:25.836 [2024-11-26 20:36:29.392984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:25.836 [2024-11-26 20:36:29.449523] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:25.836 [2024-11-26 20:36:29.449573] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:25.836 [2024-11-26 20:36:29.449610] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:25.836 [2024-11-26 20:36:29.449622] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:25.836 [2024-11-26 20:36:29.449631] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:25.836 [2024-11-26 20:36:29.451024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.836 [2024-11-26 20:36:29.451130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.836 [2024-11-26 20:36:29.451140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.093 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.093 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:26.093 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:26.093 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:26.093 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:26.093 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:26.093 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:26.351 [2024-11-26 20:36:29.847168] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:26.351 20:36:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:26.609 20:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:26.609 20:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:26.867 20:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:26.867 20:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:27.124 20:36:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:27.381 20:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=9d63ba48-ceda-48ec-b0b9-2f5479f4b8ec 00:06:27.381 20:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9d63ba48-ceda-48ec-b0b9-2f5479f4b8ec lvol 20 00:06:27.947 20:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=014db881-0811-4f2c-bc71-632a75e79af1 00:06:27.947 20:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:27.947 20:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 014db881-0811-4f2c-bc71-632a75e79af1 00:06:28.204 20:36:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:28.461 [2024-11-26 20:36:32.140429] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:28.717 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:28.974 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1560923 00:06:28.974 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:28.974 20:36:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:29.907 20:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 014db881-0811-4f2c-bc71-632a75e79af1 MY_SNAPSHOT 00:06:30.165 20:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=e75c2f95-bba1-4d21-84de-b23811e61ddc 00:06:30.165 20:36:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 014db881-0811-4f2c-bc71-632a75e79af1 30 00:06:30.423 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone e75c2f95-bba1-4d21-84de-b23811e61ddc MY_CLONE 00:06:30.681 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=179132fa-e540-408d-8431-fe065a9bc469 00:06:30.681 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 179132fa-e540-408d-8431-fe065a9bc469 00:06:31.247 20:36:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1560923 00:06:39.347 Initializing NVMe Controllers 00:06:39.347 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:39.347 Controller IO queue size 128, less than required. 00:06:39.347 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
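The spdk_nvme_perf job started above (4096-byte random writes, queue depth 128, cores 3 and 4) keeps I/O in flight while the script snapshots, resizes, clones and inflates the live lvol; its per-core results follow below. Condensed from the trace, the RPC sequence that built and then mutated the stack is roughly the following sketch (rpc.py stands for the full scripts/rpc.py path used in the log, and the $lvs/$lvol/$snap/$clone variables stand for the UUIDs the real run captured):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512                                   # Malloc0
rpc.py bdev_malloc_create 64 512                                   # Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)
lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)                  # LVOL_BDEV_INIT_SIZE=20
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)              # taken while perf is writing
rpc.py bdev_lvol_resize "$lvol" 30                                 # LVOL_BDEV_FINAL_SIZE=30
clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)
rpc.py bdev_lvol_inflate "$clone"                                  # decouple the clone from its snapshot
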
00:06:39.347 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:39.347 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:39.347 Initialization complete. Launching workers. 00:06:39.347 ======================================================== 00:06:39.347 Latency(us) 00:06:39.347 Device Information : IOPS MiB/s Average min max 00:06:39.347 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10420.09 40.70 12284.53 1056.30 74258.26 00:06:39.347 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10215.29 39.90 12531.06 6231.83 75030.73 00:06:39.347 ======================================================== 00:06:39.347 Total : 20635.38 80.61 12406.57 1056.30 75030.73 00:06:39.347 00:06:39.347 20:36:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:39.605 20:36:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 014db881-0811-4f2c-bc71-632a75e79af1 00:06:39.862 20:36:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9d63ba48-ceda-48ec-b0b9-2f5479f4b8ec 00:06:40.120 20:36:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:40.120 20:36:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:40.120 20:36:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:40.120 20:36:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:40.120 20:36:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:40.120 20:36:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:40.120 20:36:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:40.120 20:36:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:40.120 20:36:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:40.120 rmmod nvme_tcp 00:06:40.120 rmmod nvme_fabrics 00:06:40.120 rmmod nvme_keyring 00:06:40.120 20:36:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:40.120 20:36:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:40.120 20:36:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:40.120 20:36:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1560521 ']' 00:06:40.120 20:36:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1560521 00:06:40.120 20:36:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1560521 ']' 00:06:40.120 20:36:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1560521 00:06:40.120 20:36:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:40.120 20:36:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.120 20:36:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1560521 00:06:40.120 20:36:43 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.120 20:36:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.120 20:36:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1560521' 00:06:40.120 killing process with pid 1560521 00:06:40.120 20:36:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1560521 00:06:40.120 20:36:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1560521 00:06:40.379 20:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:40.379 20:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:40.379 20:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:40.379 20:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:40.379 20:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:40.379 20:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:40.379 20:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:40.379 20:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:40.379 20:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:40.379 20:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:40.379 20:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:40.379 20:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:42.912 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:42.912 00:06:42.912 real 0m19.286s 00:06:42.912 user 1m5.277s 00:06:42.912 sys 0m5.743s 00:06:42.912 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.912 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:42.912 ************************************ 00:06:42.912 END TEST nvmf_lvol 00:06:42.912 ************************************ 00:06:42.912 20:36:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:42.912 20:36:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:42.912 20:36:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.912 20:36:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:42.912 ************************************ 00:06:42.912 START TEST nvmf_lvs_grow 00:06:42.912 ************************************ 00:06:42.912 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:42.912 * Looking for test storage... 
00:06:42.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:42.912 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:42.912 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:06:42.912 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:42.912 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:42.912 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:42.912 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:42.912 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:42.912 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:42.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.913 --rc genhtml_branch_coverage=1 00:06:42.913 --rc genhtml_function_coverage=1 00:06:42.913 --rc genhtml_legend=1 00:06:42.913 --rc geninfo_all_blocks=1 00:06:42.913 --rc geninfo_unexecuted_blocks=1 00:06:42.913 00:06:42.913 ' 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:42.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.913 --rc genhtml_branch_coverage=1 00:06:42.913 --rc genhtml_function_coverage=1 00:06:42.913 --rc genhtml_legend=1 00:06:42.913 --rc geninfo_all_blocks=1 00:06:42.913 --rc geninfo_unexecuted_blocks=1 00:06:42.913 00:06:42.913 ' 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:42.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.913 --rc genhtml_branch_coverage=1 00:06:42.913 --rc genhtml_function_coverage=1 00:06:42.913 --rc genhtml_legend=1 00:06:42.913 --rc geninfo_all_blocks=1 00:06:42.913 --rc geninfo_unexecuted_blocks=1 00:06:42.913 00:06:42.913 ' 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:42.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:42.913 --rc genhtml_branch_coverage=1 00:06:42.913 --rc genhtml_function_coverage=1 00:06:42.913 --rc genhtml_legend=1 00:06:42.913 --rc geninfo_all_blocks=1 00:06:42.913 --rc geninfo_unexecuted_blocks=1 00:06:42.913 00:06:42.913 ' 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:42.913 20:36:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:42.913 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:42.913 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:42.914 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:42.914 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:42.914 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:42.914 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:42.914 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:42.914 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:42.914 20:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:44.852 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:44.852 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:06:44.852 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:44.853 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:44.853 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:44.853 20:36:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:44.853 Found net devices under 0000:09:00.0: cvl_0_0 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:44.853 Found net devices under 0000:09:00.1: cvl_0_1 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:44.853 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:45.111 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:45.111 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:45.111 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:45.111 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:45.111 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:45.111 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:45.111 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:45.111 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:45.111 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:45.111 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:45.111 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:45.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:45.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.333 ms 00:06:45.111 00:06:45.111 --- 10.0.0.2 ping statistics --- 00:06:45.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.111 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:06:45.111 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:45.111 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:45.111 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:06:45.111 00:06:45.111 --- 10.0.0.1 ping statistics --- 00:06:45.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.112 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:06:45.112 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:45.112 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:06:45.112 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:45.112 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:45.112 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:45.112 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:45.112 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:45.112 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:45.112 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:45.112 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:06:45.112 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:45.112 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:45.112 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:45.112 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1564323 00:06:45.112 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:45.112 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1564323 00:06:45.112 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1564323 ']' 00:06:45.112 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.112 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.112 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.112 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.112 20:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:45.112 [2024-11-26 20:36:48.761081] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:06:45.112 [2024-11-26 20:36:48.761196] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:45.369 [2024-11-26 20:36:48.836735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.369 [2024-11-26 20:36:48.889811] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:45.369 [2024-11-26 20:36:48.889867] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:45.369 [2024-11-26 20:36:48.889886] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:45.369 [2024-11-26 20:36:48.889896] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:45.369 [2024-11-26 20:36:48.889905] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:45.369 [2024-11-26 20:36:48.890480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.369 20:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.369 20:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:06:45.369 20:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:45.369 20:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:45.369 20:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:45.369 20:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:45.369 20:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:45.625 [2024-11-26 20:36:49.279842] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:45.625 20:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:45.625 20:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.625 20:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.625 20:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:45.883 ************************************ 00:06:45.883 START TEST lvs_grow_clean 00:06:45.883 ************************************ 00:06:45.883 20:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:06:45.883 20:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:45.883 20:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:45.883 20:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:45.883 20:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:45.883 20:36:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:45.883 20:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:45.883 20:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:45.883 20:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:45.883 20:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:46.140 20:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:46.140 20:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:46.397 20:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=73e7c988-8e21-4e95-a1b2-19b63285e88b 00:06:46.397 20:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73e7c988-8e21-4e95-a1b2-19b63285e88b 00:06:46.397 20:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:46.655 20:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:46.655 20:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:46.655 20:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 73e7c988-8e21-4e95-a1b2-19b63285e88b lvol 150 00:06:46.912 20:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=86713c1f-b139-44e6-8019-8526433a1f40 00:06:46.912 20:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:46.912 20:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:47.170 [2024-11-26 20:36:50.706791] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:47.170 [2024-11-26 20:36:50.706874] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:47.170 true 00:06:47.170 20:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
73e7c988-8e21-4e95-a1b2-19b63285e88b 00:06:47.170 20:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:47.435 20:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:47.435 20:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:47.760 20:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 86713c1f-b139-44e6-8019-8526433a1f40 00:06:48.020 20:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:48.278 [2024-11-26 20:36:51.798089] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:48.278 20:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:48.536 20:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1564765 00:06:48.536 20:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:48.536 20:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:48.536 20:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1564765 /var/tmp/bdevperf.sock 00:06:48.536 20:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1564765 ']' 00:06:48.536 20:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:48.536 20:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.536 20:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:48.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:48.536 20:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.536 20:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:48.536 [2024-11-26 20:36:52.128624] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:06:48.536 [2024-11-26 20:36:52.128691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1564765 ] 00:06:48.536 [2024-11-26 20:36:52.193392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.794 [2024-11-26 20:36:52.252445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.794 20:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.794 20:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:06:48.794 20:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:49.358 Nvme0n1 00:06:49.358 20:36:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:49.617 [ 00:06:49.617 { 00:06:49.617 "name": "Nvme0n1", 00:06:49.617 "aliases": [ 00:06:49.617 "86713c1f-b139-44e6-8019-8526433a1f40" 00:06:49.617 ], 00:06:49.617 "product_name": "NVMe disk", 00:06:49.617 "block_size": 4096, 00:06:49.617 "num_blocks": 38912, 00:06:49.617 "uuid": "86713c1f-b139-44e6-8019-8526433a1f40", 00:06:49.617 "numa_id": 0, 00:06:49.617 "assigned_rate_limits": { 00:06:49.617 "rw_ios_per_sec": 0, 00:06:49.617 "rw_mbytes_per_sec": 0, 00:06:49.617 "r_mbytes_per_sec": 0, 00:06:49.617 "w_mbytes_per_sec": 0 00:06:49.617 }, 00:06:49.617 "claimed": false, 00:06:49.617 "zoned": false, 00:06:49.617 "supported_io_types": { 00:06:49.617 "read": true, 00:06:49.617 "write": true, 00:06:49.617 "unmap": true, 00:06:49.617 "flush": true, 00:06:49.617 "reset": true, 00:06:49.617 "nvme_admin": true, 00:06:49.617 "nvme_io": true, 00:06:49.617 "nvme_io_md": false, 00:06:49.617 "write_zeroes": true, 00:06:49.617 "zcopy": false, 00:06:49.617 "get_zone_info": false, 00:06:49.617 "zone_management": false, 00:06:49.617 "zone_append": false, 00:06:49.617 "compare": true, 00:06:49.617 "compare_and_write": true, 00:06:49.617 "abort": true, 00:06:49.617 "seek_hole": false, 00:06:49.617 "seek_data": false, 00:06:49.617 "copy": true, 00:06:49.617 "nvme_iov_md": false 00:06:49.617 }, 00:06:49.617 "memory_domains": [ 00:06:49.617 { 00:06:49.617 "dma_device_id": "system", 00:06:49.617 "dma_device_type": 1 00:06:49.617 } 00:06:49.617 ], 00:06:49.617 "driver_specific": { 00:06:49.617 "nvme": [ 00:06:49.617 { 00:06:49.617 "trid": { 00:06:49.617 "trtype": "TCP", 00:06:49.617 "adrfam": "IPv4", 00:06:49.617 "traddr": "10.0.0.2", 00:06:49.617 "trsvcid": "4420", 00:06:49.617 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:49.617 }, 00:06:49.617 "ctrlr_data": { 00:06:49.617 "cntlid": 1, 00:06:49.617 "vendor_id": "0x8086", 00:06:49.617 "model_number": "SPDK bdev Controller", 00:06:49.617 "serial_number": "SPDK0", 00:06:49.617 "firmware_revision": "25.01", 00:06:49.617 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:49.617 "oacs": { 00:06:49.617 "security": 0, 00:06:49.617 "format": 0, 00:06:49.617 "firmware": 0, 00:06:49.617 "ns_manage": 0 00:06:49.617 }, 00:06:49.617 "multi_ctrlr": true, 00:06:49.617 
"ana_reporting": false 00:06:49.617 }, 00:06:49.617 "vs": { 00:06:49.617 "nvme_version": "1.3" 00:06:49.617 }, 00:06:49.617 "ns_data": { 00:06:49.617 "id": 1, 00:06:49.617 "can_share": true 00:06:49.617 } 00:06:49.617 } 00:06:49.617 ], 00:06:49.617 "mp_policy": "active_passive" 00:06:49.617 } 00:06:49.617 } 00:06:49.617 ] 00:06:49.617 20:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1564843 00:06:49.617 20:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:49.617 20:36:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:49.617 Running I/O for 10 seconds... 00:06:50.548 Latency(us) 00:06:50.548 [2024-11-26T19:36:54.245Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:50.548 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:50.548 Nvme0n1 : 1.00 15241.00 59.54 0.00 0.00 0.00 0.00 0.00 00:06:50.548 [2024-11-26T19:36:54.245Z] =================================================================================================================== 00:06:50.548 [2024-11-26T19:36:54.245Z] Total : 15241.00 59.54 0.00 0.00 0.00 0.00 0.00 00:06:50.548 00:06:51.502 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 73e7c988-8e21-4e95-a1b2-19b63285e88b 00:06:51.502 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:51.502 Nvme0n1 : 2.00 15410.00 60.20 0.00 0.00 0.00 0.00 0.00 00:06:51.502 [2024-11-26T19:36:55.199Z] =================================================================================================================== 00:06:51.502 [2024-11-26T19:36:55.199Z] Total : 15410.00 60.20 0.00 0.00 0.00 0.00 0.00 00:06:51.502 00:06:51.760 true 00:06:51.760 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73e7c988-8e21-4e95-a1b2-19b63285e88b 00:06:51.760 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:06:52.017 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:52.017 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:52.017 20:36:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1564843 00:06:52.581 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:52.581 Nvme0n1 : 3.00 15523.67 60.64 0.00 0.00 0.00 0.00 0.00 00:06:52.581 [2024-11-26T19:36:56.278Z] =================================================================================================================== 00:06:52.581 [2024-11-26T19:36:56.278Z] Total : 15523.67 60.64 0.00 0.00 0.00 0.00 0.00 00:06:52.581 00:06:53.513 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:53.513 Nvme0n1 : 4.00 15627.50 61.04 0.00 0.00 0.00 0.00 0.00 00:06:53.513 [2024-11-26T19:36:57.210Z] 
=================================================================================================================== 00:06:53.513 [2024-11-26T19:36:57.210Z] Total : 15627.50 61.04 0.00 0.00 0.00 0.00 0.00 00:06:53.513 00:06:54.882 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:54.882 Nvme0n1 : 5.00 15715.20 61.39 0.00 0.00 0.00 0.00 0.00 00:06:54.882 [2024-11-26T19:36:58.579Z] =================================================================================================================== 00:06:54.882 [2024-11-26T19:36:58.579Z] Total : 15715.20 61.39 0.00 0.00 0.00 0.00 0.00 00:06:54.882 00:06:55.815 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:55.815 Nvme0n1 : 6.00 15773.83 61.62 0.00 0.00 0.00 0.00 0.00 00:06:55.815 [2024-11-26T19:36:59.512Z] =================================================================================================================== 00:06:55.815 [2024-11-26T19:36:59.512Z] Total : 15773.83 61.62 0.00 0.00 0.00 0.00 0.00 00:06:55.815 00:06:56.745 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:56.745 Nvme0n1 : 7.00 15806.43 61.74 0.00 0.00 0.00 0.00 0.00 00:06:56.745 [2024-11-26T19:37:00.442Z] =================================================================================================================== 00:06:56.745 [2024-11-26T19:37:00.442Z] Total : 15806.43 61.74 0.00 0.00 0.00 0.00 0.00 00:06:56.745 00:06:57.692 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:57.692 Nvme0n1 : 8.00 15854.75 61.93 0.00 0.00 0.00 0.00 0.00 00:06:57.692 [2024-11-26T19:37:01.389Z] =================================================================================================================== 00:06:57.692 [2024-11-26T19:37:01.389Z] Total : 15854.75 61.93 0.00 0.00 0.00 0.00 0.00 00:06:57.692 00:06:58.624 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:58.624 Nvme0n1 : 9.00 15906.33 62.13 0.00 0.00 0.00 0.00 0.00 00:06:58.624 [2024-11-26T19:37:02.321Z] =================================================================================================================== 00:06:58.624 [2024-11-26T19:37:02.321Z] Total : 15906.33 62.13 0.00 0.00 0.00 0.00 0.00 00:06:58.624 00:06:59.556 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:59.556 Nvme0n1 : 10.00 15941.30 62.27 0.00 0.00 0.00 0.00 0.00 00:06:59.556 [2024-11-26T19:37:03.253Z] =================================================================================================================== 00:06:59.556 [2024-11-26T19:37:03.253Z] Total : 15941.30 62.27 0.00 0.00 0.00 0.00 0.00 00:06:59.556 00:06:59.556 00:06:59.556 Latency(us) 00:06:59.556 [2024-11-26T19:37:03.253Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:59.556 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:59.556 Nvme0n1 : 10.00 15947.78 62.30 0.00 0.00 8021.63 4247.70 16117.00 00:06:59.556 [2024-11-26T19:37:03.253Z] =================================================================================================================== 00:06:59.556 [2024-11-26T19:37:03.253Z] Total : 15947.78 62.30 0.00 0.00 8021.63 4247.70 16117.00 00:06:59.556 { 00:06:59.556 "results": [ 00:06:59.556 { 00:06:59.556 "job": "Nvme0n1", 00:06:59.556 "core_mask": "0x2", 00:06:59.556 "workload": "randwrite", 00:06:59.556 "status": "finished", 00:06:59.556 "queue_depth": 128, 00:06:59.556 "io_size": 4096, 00:06:59.556 
"runtime": 10.003966, 00:06:59.556 "iops": 15947.775112390425, 00:06:59.556 "mibps": 62.2959965327751, 00:06:59.556 "io_failed": 0, 00:06:59.556 "io_timeout": 0, 00:06:59.556 "avg_latency_us": 8021.6272918861905, 00:06:59.556 "min_latency_us": 4247.7037037037035, 00:06:59.556 "max_latency_us": 16117.001481481482 00:06:59.556 } 00:06:59.556 ], 00:06:59.556 "core_count": 1 00:06:59.556 } 00:06:59.556 20:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1564765 00:06:59.556 20:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1564765 ']' 00:06:59.556 20:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1564765 00:06:59.556 20:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:06:59.556 20:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.556 20:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1564765 00:06:59.813 20:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:59.813 20:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:59.813 20:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1564765' 00:06:59.813 killing process with pid 1564765 00:06:59.813 20:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1564765 00:06:59.813 Received shutdown signal, test time was about 10.000000 seconds 00:06:59.813 00:06:59.813 Latency(us) 00:06:59.813 [2024-11-26T19:37:03.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:59.813 [2024-11-26T19:37:03.510Z] =================================================================================================================== 00:06:59.813 [2024-11-26T19:37:03.510Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:59.813 20:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1564765 00:06:59.813 20:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:00.377 20:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:00.377 20:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73e7c988-8e21-4e95-a1b2-19b63285e88b 00:07:00.377 20:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:00.635 20:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:00.635 20:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:00.635 20:37:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:00.892 [2024-11-26 20:37:04.567168] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:01.150 20:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73e7c988-8e21-4e95-a1b2-19b63285e88b 00:07:01.150 20:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:01.150 20:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73e7c988-8e21-4e95-a1b2-19b63285e88b 00:07:01.150 20:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:01.150 20:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.150 20:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:01.150 20:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.150 20:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:01.150 20:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.150 20:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:01.150 20:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:01.150 20:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73e7c988-8e21-4e95-a1b2-19b63285e88b 00:07:01.407 request: 00:07:01.407 { 00:07:01.407 "uuid": "73e7c988-8e21-4e95-a1b2-19b63285e88b", 00:07:01.407 "method": "bdev_lvol_get_lvstores", 00:07:01.407 "req_id": 1 00:07:01.407 } 00:07:01.407 Got JSON-RPC error response 00:07:01.407 response: 00:07:01.407 { 00:07:01.407 "code": -19, 00:07:01.407 "message": "No such device" 00:07:01.407 } 00:07:01.408 20:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:01.408 20:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:01.408 20:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:01.408 20:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:01.408 20:37:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:01.665 aio_bdev 00:07:01.665 20:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 86713c1f-b139-44e6-8019-8526433a1f40 00:07:01.665 20:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=86713c1f-b139-44e6-8019-8526433a1f40 00:07:01.665 20:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:01.665 20:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:01.665 20:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:01.665 20:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:01.665 20:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:01.922 20:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 86713c1f-b139-44e6-8019-8526433a1f40 -t 2000 00:07:02.179 [ 00:07:02.179 { 00:07:02.179 "name": "86713c1f-b139-44e6-8019-8526433a1f40", 00:07:02.179 "aliases": [ 00:07:02.179 "lvs/lvol" 00:07:02.179 ], 00:07:02.179 "product_name": "Logical Volume", 00:07:02.179 "block_size": 4096, 00:07:02.179 "num_blocks": 38912, 00:07:02.179 "uuid": "86713c1f-b139-44e6-8019-8526433a1f40", 00:07:02.179 "assigned_rate_limits": { 00:07:02.179 "rw_ios_per_sec": 0, 00:07:02.179 "rw_mbytes_per_sec": 0, 00:07:02.179 "r_mbytes_per_sec": 0, 00:07:02.179 "w_mbytes_per_sec": 0 00:07:02.179 }, 00:07:02.179 "claimed": false, 00:07:02.179 "zoned": false, 00:07:02.179 "supported_io_types": { 00:07:02.179 "read": true, 00:07:02.179 "write": true, 00:07:02.179 "unmap": true, 00:07:02.179 "flush": false, 00:07:02.179 "reset": true, 00:07:02.179 "nvme_admin": false, 00:07:02.179 "nvme_io": false, 00:07:02.179 "nvme_io_md": false, 00:07:02.179 "write_zeroes": true, 00:07:02.179 "zcopy": false, 00:07:02.179 "get_zone_info": false, 00:07:02.179 "zone_management": false, 00:07:02.179 "zone_append": false, 00:07:02.179 "compare": false, 00:07:02.179 "compare_and_write": false, 00:07:02.179 "abort": false, 00:07:02.179 "seek_hole": true, 00:07:02.179 "seek_data": true, 00:07:02.179 "copy": false, 00:07:02.179 "nvme_iov_md": false 00:07:02.179 }, 00:07:02.179 "driver_specific": { 00:07:02.179 "lvol": { 00:07:02.179 "lvol_store_uuid": "73e7c988-8e21-4e95-a1b2-19b63285e88b", 00:07:02.179 "base_bdev": "aio_bdev", 00:07:02.179 "thin_provision": false, 00:07:02.179 "num_allocated_clusters": 38, 00:07:02.179 "snapshot": false, 00:07:02.179 "clone": false, 00:07:02.179 "esnap_clone": false 00:07:02.179 } 00:07:02.179 } 00:07:02.179 } 00:07:02.179 ] 00:07:02.179 20:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:02.179 20:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73e7c988-8e21-4e95-a1b2-19b63285e88b 00:07:02.179 
20:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:02.436 20:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:02.436 20:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 73e7c988-8e21-4e95-a1b2-19b63285e88b 00:07:02.436 20:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:02.693 20:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:02.693 20:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 86713c1f-b139-44e6-8019-8526433a1f40 00:07:02.949 20:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 73e7c988-8e21-4e95-a1b2-19b63285e88b 00:07:03.205 20:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:03.461 20:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:03.461 00:07:03.462 real 0m17.720s 00:07:03.462 user 0m17.253s 00:07:03.462 sys 0m1.818s 00:07:03.462 20:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.462 20:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:03.462 ************************************ 00:07:03.462 END TEST lvs_grow_clean 00:07:03.462 ************************************ 00:07:03.462 20:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:03.462 20:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:03.462 20:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.462 20:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:03.462 ************************************ 00:07:03.462 START TEST lvs_grow_dirty 00:07:03.462 ************************************ 00:07:03.462 20:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:03.462 20:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:03.462 20:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:03.462 20:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:03.462 20:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:03.462 20:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:03.462 20:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:03.462 20:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:03.462 20:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:03.462 20:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:03.718 20:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:03.718 20:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:03.975 20:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=42b14b00-6a43-4e14-b18a-e935f6a74cba 00:07:03.975 20:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42b14b00-6a43-4e14-b18a-e935f6a74cba 00:07:03.975 20:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:04.539 20:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:04.539 20:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:04.539 20:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 42b14b00-6a43-4e14-b18a-e935f6a74cba lvol 150 00:07:04.795 20:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=12e5eca4-a8d6-4416-a47e-311c31a3f4cb 00:07:04.795 20:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:04.796 20:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:05.052 [2024-11-26 20:37:08.496809] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:05.052 [2024-11-26 20:37:08.496890] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:05.052 true 00:07:05.052 20:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42b14b00-6a43-4e14-b18a-e935f6a74cba 00:07:05.052 20:37:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:05.308 20:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:05.308 20:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:05.567 20:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 12e5eca4-a8d6-4416-a47e-311c31a3f4cb 00:07:05.852 20:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:06.109 [2024-11-26 20:37:09.588101] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:06.109 20:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:06.367 20:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1566842 00:07:06.367 20:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:06.367 20:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:06.367 20:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1566842 /var/tmp/bdevperf.sock 00:07:06.367 20:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1566842 ']' 00:07:06.367 20:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:06.368 20:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.368 20:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:06.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:06.368 20:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.368 20:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:06.368 [2024-11-26 20:37:09.918880] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:07:06.368 [2024-11-26 20:37:09.918946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1566842 ] 00:07:06.368 [2024-11-26 20:37:09.988006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.368 [2024-11-26 20:37:10.052986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.625 20:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.625 20:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:06.625 20:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:06.883 Nvme0n1 00:07:06.883 20:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:07.141 [ 00:07:07.141 { 00:07:07.141 "name": "Nvme0n1", 00:07:07.141 "aliases": [ 00:07:07.141 "12e5eca4-a8d6-4416-a47e-311c31a3f4cb" 00:07:07.141 ], 00:07:07.141 "product_name": "NVMe disk", 00:07:07.141 "block_size": 4096, 00:07:07.141 "num_blocks": 38912, 00:07:07.141 "uuid": "12e5eca4-a8d6-4416-a47e-311c31a3f4cb", 00:07:07.141 "numa_id": 0, 00:07:07.141 "assigned_rate_limits": { 00:07:07.141 "rw_ios_per_sec": 0, 00:07:07.141 "rw_mbytes_per_sec": 0, 00:07:07.141 "r_mbytes_per_sec": 0, 00:07:07.141 "w_mbytes_per_sec": 0 00:07:07.141 }, 00:07:07.141 "claimed": false, 00:07:07.141 "zoned": false, 00:07:07.141 "supported_io_types": { 00:07:07.141 "read": true, 00:07:07.141 "write": true, 00:07:07.141 "unmap": true, 00:07:07.141 "flush": true, 00:07:07.141 "reset": true, 00:07:07.141 "nvme_admin": true, 00:07:07.141 "nvme_io": true, 00:07:07.141 "nvme_io_md": false, 00:07:07.141 "write_zeroes": true, 00:07:07.141 "zcopy": false, 00:07:07.141 "get_zone_info": false, 00:07:07.141 "zone_management": false, 00:07:07.141 "zone_append": false, 00:07:07.141 "compare": true, 00:07:07.141 "compare_and_write": true, 00:07:07.141 "abort": true, 00:07:07.141 "seek_hole": false, 00:07:07.141 "seek_data": false, 00:07:07.141 "copy": true, 00:07:07.141 "nvme_iov_md": false 00:07:07.141 }, 00:07:07.141 "memory_domains": [ 00:07:07.141 { 00:07:07.141 "dma_device_id": "system", 00:07:07.141 "dma_device_type": 1 00:07:07.141 } 00:07:07.141 ], 00:07:07.141 "driver_specific": { 00:07:07.141 "nvme": [ 00:07:07.141 { 00:07:07.141 "trid": { 00:07:07.141 "trtype": "TCP", 00:07:07.141 "adrfam": "IPv4", 00:07:07.141 "traddr": "10.0.0.2", 00:07:07.141 "trsvcid": "4420", 00:07:07.141 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:07.141 }, 00:07:07.141 "ctrlr_data": { 00:07:07.141 "cntlid": 1, 00:07:07.141 "vendor_id": "0x8086", 00:07:07.141 "model_number": "SPDK bdev Controller", 00:07:07.141 "serial_number": "SPDK0", 00:07:07.141 "firmware_revision": "25.01", 00:07:07.141 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:07.141 "oacs": { 00:07:07.141 "security": 0, 00:07:07.141 "format": 0, 00:07:07.141 "firmware": 0, 00:07:07.141 "ns_manage": 0 00:07:07.141 }, 00:07:07.141 "multi_ctrlr": true, 00:07:07.141 
"ana_reporting": false 00:07:07.141 }, 00:07:07.141 "vs": { 00:07:07.141 "nvme_version": "1.3" 00:07:07.141 }, 00:07:07.141 "ns_data": { 00:07:07.141 "id": 1, 00:07:07.141 "can_share": true 00:07:07.141 } 00:07:07.141 } 00:07:07.141 ], 00:07:07.141 "mp_policy": "active_passive" 00:07:07.141 } 00:07:07.141 } 00:07:07.141 ] 00:07:07.141 20:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1566978 00:07:07.141 20:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:07.141 20:37:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:07.398 Running I/O for 10 seconds... 00:07:08.331 Latency(us) 00:07:08.331 [2024-11-26T19:37:12.028Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:08.331 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:08.331 Nvme0n1 : 1.00 15114.00 59.04 0.00 0.00 0.00 0.00 0.00 00:07:08.331 [2024-11-26T19:37:12.028Z] =================================================================================================================== 00:07:08.331 [2024-11-26T19:37:12.028Z] Total : 15114.00 59.04 0.00 0.00 0.00 0.00 0.00 00:07:08.331 00:07:09.263 20:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 42b14b00-6a43-4e14-b18a-e935f6a74cba 00:07:09.263 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:09.263 Nvme0n1 : 2.00 15304.00 59.78 0.00 0.00 0.00 0.00 0.00 00:07:09.263 [2024-11-26T19:37:12.960Z] =================================================================================================================== 00:07:09.263 [2024-11-26T19:37:12.960Z] Total : 15304.00 59.78 0.00 0.00 0.00 0.00 0.00 00:07:09.263 00:07:09.521 true 00:07:09.521 20:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42b14b00-6a43-4e14-b18a-e935f6a74cba 00:07:09.521 20:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:09.778 20:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:09.778 20:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:09.778 20:37:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1566978 00:07:10.343 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:10.343 Nvme0n1 : 3.00 15367.33 60.03 0.00 0.00 0.00 0.00 0.00 00:07:10.343 [2024-11-26T19:37:14.040Z] =================================================================================================================== 00:07:10.343 [2024-11-26T19:37:14.040Z] Total : 15367.33 60.03 0.00 0.00 0.00 0.00 0.00 00:07:10.343 00:07:11.275 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:11.275 Nvme0n1 : 4.00 15462.50 60.40 0.00 0.00 0.00 0.00 0.00 00:07:11.275 [2024-11-26T19:37:14.972Z] 
=================================================================================================================== 00:07:11.275 [2024-11-26T19:37:14.972Z] Total : 15462.50 60.40 0.00 0.00 0.00 0.00 0.00 00:07:11.275 00:07:12.647 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:12.647 Nvme0n1 : 5.00 15532.40 60.67 0.00 0.00 0.00 0.00 0.00 00:07:12.647 [2024-11-26T19:37:16.344Z] =================================================================================================================== 00:07:12.647 [2024-11-26T19:37:16.344Z] Total : 15532.40 60.67 0.00 0.00 0.00 0.00 0.00 00:07:12.647 00:07:13.581 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.581 Nvme0n1 : 6.00 15578.83 60.85 0.00 0.00 0.00 0.00 0.00 00:07:13.581 [2024-11-26T19:37:17.278Z] =================================================================================================================== 00:07:13.581 [2024-11-26T19:37:17.278Z] Total : 15578.83 60.85 0.00 0.00 0.00 0.00 0.00 00:07:13.581 00:07:14.513 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.513 Nvme0n1 : 7.00 15626.00 61.04 0.00 0.00 0.00 0.00 0.00 00:07:14.513 [2024-11-26T19:37:18.210Z] =================================================================================================================== 00:07:14.513 [2024-11-26T19:37:18.210Z] Total : 15626.00 61.04 0.00 0.00 0.00 0.00 0.00 00:07:14.513 00:07:15.446 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.446 Nvme0n1 : 8.00 15673.00 61.22 0.00 0.00 0.00 0.00 0.00 00:07:15.446 [2024-11-26T19:37:19.143Z] =================================================================================================================== 00:07:15.446 [2024-11-26T19:37:19.143Z] Total : 15673.00 61.22 0.00 0.00 0.00 0.00 0.00 00:07:15.446 00:07:16.380 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:16.380 Nvme0n1 : 9.00 15709.56 61.37 0.00 0.00 0.00 0.00 0.00 00:07:16.380 [2024-11-26T19:37:20.077Z] =================================================================================================================== 00:07:16.380 [2024-11-26T19:37:20.077Z] Total : 15709.56 61.37 0.00 0.00 0.00 0.00 0.00 00:07:16.380 00:07:17.314 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.314 Nvme0n1 : 10.00 15713.40 61.38 0.00 0.00 0.00 0.00 0.00 00:07:17.314 [2024-11-26T19:37:21.011Z] =================================================================================================================== 00:07:17.314 [2024-11-26T19:37:21.011Z] Total : 15713.40 61.38 0.00 0.00 0.00 0.00 0.00 00:07:17.314 00:07:17.314 00:07:17.314 Latency(us) 00:07:17.314 [2024-11-26T19:37:21.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:17.314 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.314 Nvme0n1 : 10.01 15713.24 61.38 0.00 0.00 8141.26 2318.03 15728.64 00:07:17.314 [2024-11-26T19:37:21.011Z] =================================================================================================================== 00:07:17.314 [2024-11-26T19:37:21.011Z] Total : 15713.24 61.38 0.00 0.00 8141.26 2318.03 15728.64 00:07:17.314 { 00:07:17.314 "results": [ 00:07:17.314 { 00:07:17.314 "job": "Nvme0n1", 00:07:17.314 "core_mask": "0x2", 00:07:17.314 "workload": "randwrite", 00:07:17.314 "status": "finished", 00:07:17.314 "queue_depth": 128, 00:07:17.314 "io_size": 4096, 00:07:17.314 
"runtime": 10.008249, 00:07:17.314 "iops": 15713.238149850189, 00:07:17.314 "mibps": 61.3798365228523, 00:07:17.314 "io_failed": 0, 00:07:17.314 "io_timeout": 0, 00:07:17.314 "avg_latency_us": 8141.256672888884, 00:07:17.314 "min_latency_us": 2318.0325925925927, 00:07:17.314 "max_latency_us": 15728.64 00:07:17.314 } 00:07:17.314 ], 00:07:17.314 "core_count": 1 00:07:17.314 } 00:07:17.314 20:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1566842 00:07:17.314 20:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1566842 ']' 00:07:17.314 20:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1566842 00:07:17.314 20:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:17.314 20:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.314 20:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1566842 00:07:17.314 20:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:17.574 20:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:17.574 20:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1566842' 00:07:17.574 killing process with pid 1566842 00:07:17.574 20:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1566842 00:07:17.574 Received shutdown signal, test time was about 10.000000 seconds 00:07:17.574 00:07:17.574 Latency(us) 00:07:17.574 [2024-11-26T19:37:21.271Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:17.574 [2024-11-26T19:37:21.271Z] =================================================================================================================== 00:07:17.574 [2024-11-26T19:37:21.271Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:17.574 20:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1566842 00:07:17.574 20:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:17.843 20:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:18.112 20:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42b14b00-6a43-4e14-b18a-e935f6a74cba 00:07:18.112 20:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:18.370 20:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:18.370 20:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:18.370 20:37:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1564323 00:07:18.370 20:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1564323 00:07:18.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1564323 Killed "${NVMF_APP[@]}" "$@" 00:07:18.628 20:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:18.629 20:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:18.629 20:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:18.629 20:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:18.629 20:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:18.629 20:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1568313 00:07:18.629 20:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:18.629 20:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1568313 00:07:18.629 20:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1568313 ']' 00:07:18.629 20:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.629 20:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.629 20:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.629 20:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.629 20:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:18.629 [2024-11-26 20:37:22.141040] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:07:18.629 [2024-11-26 20:37:22.141117] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:18.629 [2024-11-26 20:37:22.215686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.629 [2024-11-26 20:37:22.274268] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:18.629 [2024-11-26 20:37:22.274342] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:18.629 [2024-11-26 20:37:22.274365] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:18.629 [2024-11-26 20:37:22.274376] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
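The records above and below cover the "dirty" recovery path: the original nvmf_tgt (pid 1564323) is killed with SIGKILL so the lvstore is never cleanly unloaded, a fresh target is started, and re-attaching the backing file as an AIO bdev forces the blobstore to replay and recover its metadata ("Performing recovery on blobstore") before the lvol and lvstore are torn down. An abridged recap of the commands shown in these records follows (same SPDK shorthand as above; the intermediate bdev_aio_delete / "No such device" negative check is omitted):

  # leave lvstore 42b14b00-... dirty by killing the app hard, then restart the target
  kill -9 1564323
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  # re-creating the AIO bdev triggers blobstore recovery of the dirty lvstore
  $SPDK/scripts/rpc.py bdev_aio_create $SPDK/test/nvmf/target/aio_bdev aio_bdev 4096
  $SPDK/scripts/rpc.py bdev_wait_for_examine
  $SPDK/scripts/rpc.py bdev_get_bdevs -b 12e5eca4-a8d6-4416-a47e-311c31a3f4cb -t 2000
  # verify the grown lvstore survived the crash: 99 total data clusters, 61 free
  $SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u 42b14b00-6a43-4e14-b18a-e935f6a74cba | jq -r '.[0].free_clusters'
  # cleanup: drop the lvol, the lvstore and the AIO bdev
  $SPDK/scripts/rpc.py bdev_lvol_delete 12e5eca4-a8d6-4416-a47e-311c31a3f4cb
  $SPDK/scripts/rpc.py bdev_lvol_delete_lvstore -u 42b14b00-6a43-4e14-b18a-e935f6a74cba
  $SPDK/scripts/rpc.py bdev_aio_delete aio_bdev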
00:07:18.629 [2024-11-26 20:37:22.274386] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:18.629 [2024-11-26 20:37:22.274952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.887 20:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.887 20:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:18.887 20:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:18.887 20:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:18.887 20:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:18.887 20:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:18.887 20:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:19.145 [2024-11-26 20:37:22.676902] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:19.145 [2024-11-26 20:37:22.677025] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:19.145 [2024-11-26 20:37:22.677069] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:19.145 20:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:19.145 20:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 12e5eca4-a8d6-4416-a47e-311c31a3f4cb 00:07:19.145 20:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=12e5eca4-a8d6-4416-a47e-311c31a3f4cb 00:07:19.145 20:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:19.145 20:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:19.145 20:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:19.145 20:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:19.145 20:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:19.402 20:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 12e5eca4-a8d6-4416-a47e-311c31a3f4cb -t 2000 00:07:19.659 [ 00:07:19.659 { 00:07:19.659 "name": "12e5eca4-a8d6-4416-a47e-311c31a3f4cb", 00:07:19.659 "aliases": [ 00:07:19.659 "lvs/lvol" 00:07:19.659 ], 00:07:19.659 "product_name": "Logical Volume", 00:07:19.659 "block_size": 4096, 00:07:19.659 "num_blocks": 38912, 00:07:19.659 "uuid": "12e5eca4-a8d6-4416-a47e-311c31a3f4cb", 00:07:19.659 "assigned_rate_limits": { 00:07:19.659 "rw_ios_per_sec": 0, 00:07:19.659 "rw_mbytes_per_sec": 0, 
00:07:19.659 "r_mbytes_per_sec": 0, 00:07:19.659 "w_mbytes_per_sec": 0 00:07:19.659 }, 00:07:19.659 "claimed": false, 00:07:19.659 "zoned": false, 00:07:19.659 "supported_io_types": { 00:07:19.659 "read": true, 00:07:19.659 "write": true, 00:07:19.659 "unmap": true, 00:07:19.659 "flush": false, 00:07:19.659 "reset": true, 00:07:19.659 "nvme_admin": false, 00:07:19.659 "nvme_io": false, 00:07:19.659 "nvme_io_md": false, 00:07:19.659 "write_zeroes": true, 00:07:19.659 "zcopy": false, 00:07:19.659 "get_zone_info": false, 00:07:19.659 "zone_management": false, 00:07:19.659 "zone_append": false, 00:07:19.659 "compare": false, 00:07:19.659 "compare_and_write": false, 00:07:19.659 "abort": false, 00:07:19.659 "seek_hole": true, 00:07:19.659 "seek_data": true, 00:07:19.659 "copy": false, 00:07:19.659 "nvme_iov_md": false 00:07:19.659 }, 00:07:19.659 "driver_specific": { 00:07:19.659 "lvol": { 00:07:19.659 "lvol_store_uuid": "42b14b00-6a43-4e14-b18a-e935f6a74cba", 00:07:19.659 "base_bdev": "aio_bdev", 00:07:19.659 "thin_provision": false, 00:07:19.659 "num_allocated_clusters": 38, 00:07:19.659 "snapshot": false, 00:07:19.659 "clone": false, 00:07:19.659 "esnap_clone": false 00:07:19.659 } 00:07:19.659 } 00:07:19.659 } 00:07:19.659 ] 00:07:19.659 20:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:19.659 20:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42b14b00-6a43-4e14-b18a-e935f6a74cba 00:07:19.660 20:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:19.917 20:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:19.917 20:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42b14b00-6a43-4e14-b18a-e935f6a74cba 00:07:19.917 20:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:20.175 20:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:20.175 20:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:20.433 [2024-11-26 20:37:24.046641] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:20.433 20:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42b14b00-6a43-4e14-b18a-e935f6a74cba 00:07:20.433 20:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:20.433 20:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42b14b00-6a43-4e14-b18a-e935f6a74cba 00:07:20.433 20:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:20.433 20:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:20.433 20:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:20.433 20:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:20.433 20:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:20.433 20:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:20.433 20:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:20.433 20:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:20.433 20:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42b14b00-6a43-4e14-b18a-e935f6a74cba 00:07:20.692 request: 00:07:20.692 { 00:07:20.692 "uuid": "42b14b00-6a43-4e14-b18a-e935f6a74cba", 00:07:20.692 "method": "bdev_lvol_get_lvstores", 00:07:20.692 "req_id": 1 00:07:20.692 } 00:07:20.692 Got JSON-RPC error response 00:07:20.692 response: 00:07:20.692 { 00:07:20.692 "code": -19, 00:07:20.692 "message": "No such device" 00:07:20.692 } 00:07:20.692 20:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:20.692 20:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:20.692 20:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:20.692 20:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:20.692 20:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:20.950 aio_bdev 00:07:20.950 20:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 12e5eca4-a8d6-4416-a47e-311c31a3f4cb 00:07:20.950 20:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=12e5eca4-a8d6-4416-a47e-311c31a3f4cb 00:07:20.950 20:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:20.950 20:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:20.950 20:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:20.950 20:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:20.950 20:37:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:21.233 20:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 12e5eca4-a8d6-4416-a47e-311c31a3f4cb -t 2000 00:07:21.491 [ 00:07:21.491 { 00:07:21.491 "name": "12e5eca4-a8d6-4416-a47e-311c31a3f4cb", 00:07:21.491 "aliases": [ 00:07:21.491 "lvs/lvol" 00:07:21.491 ], 00:07:21.491 "product_name": "Logical Volume", 00:07:21.491 "block_size": 4096, 00:07:21.491 "num_blocks": 38912, 00:07:21.491 "uuid": "12e5eca4-a8d6-4416-a47e-311c31a3f4cb", 00:07:21.491 "assigned_rate_limits": { 00:07:21.491 "rw_ios_per_sec": 0, 00:07:21.491 "rw_mbytes_per_sec": 0, 00:07:21.491 "r_mbytes_per_sec": 0, 00:07:21.491 "w_mbytes_per_sec": 0 00:07:21.491 }, 00:07:21.491 "claimed": false, 00:07:21.491 "zoned": false, 00:07:21.491 "supported_io_types": { 00:07:21.491 "read": true, 00:07:21.491 "write": true, 00:07:21.491 "unmap": true, 00:07:21.491 "flush": false, 00:07:21.491 "reset": true, 00:07:21.491 "nvme_admin": false, 00:07:21.491 "nvme_io": false, 00:07:21.491 "nvme_io_md": false, 00:07:21.491 "write_zeroes": true, 00:07:21.491 "zcopy": false, 00:07:21.491 "get_zone_info": false, 00:07:21.491 "zone_management": false, 00:07:21.491 "zone_append": false, 00:07:21.491 "compare": false, 00:07:21.491 "compare_and_write": false, 00:07:21.491 "abort": false, 00:07:21.491 "seek_hole": true, 00:07:21.491 "seek_data": true, 00:07:21.491 "copy": false, 00:07:21.491 "nvme_iov_md": false 00:07:21.491 }, 00:07:21.491 "driver_specific": { 00:07:21.491 "lvol": { 00:07:21.491 "lvol_store_uuid": "42b14b00-6a43-4e14-b18a-e935f6a74cba", 00:07:21.491 "base_bdev": "aio_bdev", 00:07:21.491 "thin_provision": false, 00:07:21.491 "num_allocated_clusters": 38, 00:07:21.491 "snapshot": false, 00:07:21.491 "clone": false, 00:07:21.491 "esnap_clone": false 00:07:21.491 } 00:07:21.491 } 00:07:21.491 } 00:07:21.491 ] 00:07:21.491 20:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:21.491 20:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42b14b00-6a43-4e14-b18a-e935f6a74cba 00:07:21.491 20:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:21.749 20:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:21.749 20:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42b14b00-6a43-4e14-b18a-e935f6a74cba 00:07:21.749 20:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:22.007 20:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:22.007 20:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 12e5eca4-a8d6-4416-a47e-311c31a3f4cb 00:07:22.573 20:37:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 42b14b00-6a43-4e14-b18a-e935f6a74cba 00:07:22.573 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:22.831 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:23.088 00:07:23.088 real 0m19.447s 00:07:23.088 user 0m49.225s 00:07:23.088 sys 0m4.610s 00:07:23.088 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.088 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:23.088 ************************************ 00:07:23.088 END TEST lvs_grow_dirty 00:07:23.088 ************************************ 00:07:23.088 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:23.088 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:23.088 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:23.088 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:23.088 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:23.088 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:23.088 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:23.088 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:23.088 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:23.088 nvmf_trace.0 00:07:23.088 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:23.088 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:23.088 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:23.088 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:23.088 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:23.088 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:23.088 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:23.088 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:23.088 rmmod nvme_tcp 00:07:23.088 rmmod nvme_fabrics 00:07:23.088 rmmod nvme_keyring 00:07:23.088 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:23.088 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:23.088 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:23.088 
20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1568313 ']' 00:07:23.088 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1568313 00:07:23.088 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1568313 ']' 00:07:23.088 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1568313 00:07:23.088 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:23.088 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.088 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1568313 00:07:23.088 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:23.088 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:23.088 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1568313' 00:07:23.088 killing process with pid 1568313 00:07:23.088 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1568313 00:07:23.088 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1568313 00:07:23.346 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:23.346 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:23.346 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:23.346 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:23.346 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:23.346 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:23.346 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:23.346 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:23.346 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:23.346 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.346 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:23.346 20:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.878 20:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:25.878 00:07:25.878 real 0m42.835s 00:07:25.878 user 1m12.558s 00:07:25.878 sys 0m8.554s 00:07:25.878 20:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.878 20:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:25.878 ************************************ 00:07:25.878 END TEST nvmf_lvs_grow 00:07:25.878 ************************************ 00:07:25.878 20:37:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:25.878 20:37:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:25.878 20:37:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.878 20:37:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:25.878 ************************************ 00:07:25.878 START TEST nvmf_bdev_io_wait 00:07:25.878 ************************************ 00:07:25.878 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:25.878 * Looking for test storage... 00:07:25.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:25.878 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:25.878 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:07:25.878 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:25.878 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:25.878 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:25.878 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:25.878 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:25.878 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.878 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:25.878 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:25.878 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:25.878 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:25.878 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:25.878 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:25.878 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:25.878 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:25.878 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:25.878 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:25.878 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:25.878 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:25.878 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:25.878 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.878 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:25.878 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:25.878 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:25.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.879 --rc genhtml_branch_coverage=1 00:07:25.879 --rc genhtml_function_coverage=1 00:07:25.879 --rc genhtml_legend=1 00:07:25.879 --rc geninfo_all_blocks=1 00:07:25.879 --rc geninfo_unexecuted_blocks=1 00:07:25.879 00:07:25.879 ' 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:25.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.879 --rc genhtml_branch_coverage=1 00:07:25.879 --rc genhtml_function_coverage=1 00:07:25.879 --rc genhtml_legend=1 00:07:25.879 --rc geninfo_all_blocks=1 00:07:25.879 --rc geninfo_unexecuted_blocks=1 00:07:25.879 00:07:25.879 ' 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:25.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.879 --rc genhtml_branch_coverage=1 00:07:25.879 --rc genhtml_function_coverage=1 00:07:25.879 --rc genhtml_legend=1 00:07:25.879 --rc geninfo_all_blocks=1 00:07:25.879 --rc geninfo_unexecuted_blocks=1 00:07:25.879 00:07:25.879 ' 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:25.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.879 --rc genhtml_branch_coverage=1 00:07:25.879 --rc genhtml_function_coverage=1 00:07:25.879 --rc genhtml_legend=1 00:07:25.879 --rc geninfo_all_blocks=1 00:07:25.879 --rc geninfo_unexecuted_blocks=1 00:07:25.879 00:07:25.879 ' 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:25.879 20:37:29 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:25.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:25.879 20:37:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:27.818 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:27.818 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:27.818 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:27.818 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:27.818 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:27.818 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:27.818 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:27.818 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:27.818 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:27.818 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:27.818 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:27.818 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:27.818 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:27.818 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:27.819 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:27.819 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:27.819 20:37:31 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:27.819 Found net devices under 0000:09:00.0: cvl_0_0 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:27.819 Found net devices under 0000:09:00.1: cvl_0_1 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:27.819 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:27.820 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:27.820 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:27.820 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:27.820 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:27.820 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:27.820 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:27.820 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:27.820 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:28.078 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:28.078 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:28.078 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:28.078 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:28.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:28.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:07:28.078 00:07:28.078 --- 10.0.0.2 ping statistics --- 00:07:28.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.078 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:07:28.078 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:28.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:28.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:07:28.078 00:07:28.078 --- 10.0.0.1 ping statistics --- 00:07:28.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.078 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:07:28.078 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:28.078 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:28.078 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:28.078 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:28.078 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:28.078 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:28.078 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:28.078 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:28.078 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:28.078 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:28.078 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:28.078 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:28.078 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:28.078 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1570971 00:07:28.078 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:28.079 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1570971 00:07:28.079 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1570971 ']' 00:07:28.079 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.079 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.079 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.079 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.079 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:28.079 [2024-11-26 20:37:31.620224] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
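The namespace plumbing traced above is the heart of nvmf_tcp_init: the target-side E810 port (cvl_0_0) is moved into a private network namespace, both ends of the link are addressed on 10.0.0.0/24, the NVMe/TCP port is opened in iptables, and the target application is then started inside the namespace. A condensed sketch of that sequence, using the interface names, addresses, and flags reported in this run (paths are relative to the SPDK checkout; the readiness probe at the end is roughly what the harness's waitforlisten helper does, not a command shown verbatim in the trace):

# Move the target port into its own namespace and address both ends of the link.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side (inside the namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open TCP/4420 toward the initiator interface and verify reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# Start the target inside the namespace and wait until its RPC socket answers.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!
./scripts/rpc.py rpc_get_methods > /dev/null   # readiness probe; waitforlisten polls /var/tmp/spdk.sock like this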
00:07:28.079 [2024-11-26 20:37:31.620340] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.079 [2024-11-26 20:37:31.693733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:28.079 [2024-11-26 20:37:31.751760] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:28.079 [2024-11-26 20:37:31.751811] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:28.079 [2024-11-26 20:37:31.751834] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:28.079 [2024-11-26 20:37:31.751845] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:28.079 [2024-11-26 20:37:31.751854] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:28.079 [2024-11-26 20:37:31.753404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.079 [2024-11-26 20:37:31.753462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.079 [2024-11-26 20:37:31.753530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:28.079 [2024-11-26 20:37:31.753534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.336 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.336 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:28.336 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:28.336 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:28.336 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:28.336 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:28.336 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:28.336 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.336 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:28.336 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.336 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:28.336 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.336 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:28.336 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.336 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:28.336 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.336 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:07:28.336 [2024-11-26 20:37:31.956274] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:28.336 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.336 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:28.336 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.336 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:28.336 Malloc0 00:07:28.336 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.336 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:28.336 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.336 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:28.336 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.336 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:28.336 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.336 20:37:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:28.336 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.336 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:28.336 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.336 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:28.336 [2024-11-26 20:37:32.007726] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:28.336 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.336 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1571009 00:07:28.336 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:28.336 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:28.336 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1571011 00:07:28.336 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:28.336 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:28.336 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:28.336 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:28.336 { 00:07:28.336 "params": { 
00:07:28.336 "name": "Nvme$subsystem", 00:07:28.336 "trtype": "$TEST_TRANSPORT", 00:07:28.336 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:28.336 "adrfam": "ipv4", 00:07:28.336 "trsvcid": "$NVMF_PORT", 00:07:28.336 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:28.336 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:28.336 "hdgst": ${hdgst:-false}, 00:07:28.336 "ddgst": ${ddgst:-false} 00:07:28.336 }, 00:07:28.336 "method": "bdev_nvme_attach_controller" 00:07:28.337 } 00:07:28.337 EOF 00:07:28.337 )") 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1571013 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:28.337 { 00:07:28.337 "params": { 00:07:28.337 "name": "Nvme$subsystem", 00:07:28.337 "trtype": "$TEST_TRANSPORT", 00:07:28.337 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:28.337 "adrfam": "ipv4", 00:07:28.337 "trsvcid": "$NVMF_PORT", 00:07:28.337 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:28.337 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:28.337 "hdgst": ${hdgst:-false}, 00:07:28.337 "ddgst": ${ddgst:-false} 00:07:28.337 }, 00:07:28.337 "method": "bdev_nvme_attach_controller" 00:07:28.337 } 00:07:28.337 EOF 00:07:28.337 )") 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1571016 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:28.337 { 00:07:28.337 "params": { 00:07:28.337 "name": "Nvme$subsystem", 00:07:28.337 "trtype": "$TEST_TRANSPORT", 00:07:28.337 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:28.337 "adrfam": "ipv4", 00:07:28.337 "trsvcid": "$NVMF_PORT", 00:07:28.337 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:28.337 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:28.337 "hdgst": ${hdgst:-false}, 
00:07:28.337 "ddgst": ${ddgst:-false} 00:07:28.337 }, 00:07:28.337 "method": "bdev_nvme_attach_controller" 00:07:28.337 } 00:07:28.337 EOF 00:07:28.337 )") 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:28.337 { 00:07:28.337 "params": { 00:07:28.337 "name": "Nvme$subsystem", 00:07:28.337 "trtype": "$TEST_TRANSPORT", 00:07:28.337 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:28.337 "adrfam": "ipv4", 00:07:28.337 "trsvcid": "$NVMF_PORT", 00:07:28.337 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:28.337 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:28.337 "hdgst": ${hdgst:-false}, 00:07:28.337 "ddgst": ${ddgst:-false} 00:07:28.337 }, 00:07:28.337 "method": "bdev_nvme_attach_controller" 00:07:28.337 } 00:07:28.337 EOF 00:07:28.337 )") 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1571009 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:28.337 "params": { 00:07:28.337 "name": "Nvme1", 00:07:28.337 "trtype": "tcp", 00:07:28.337 "traddr": "10.0.0.2", 00:07:28.337 "adrfam": "ipv4", 00:07:28.337 "trsvcid": "4420", 00:07:28.337 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:28.337 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:28.337 "hdgst": false, 00:07:28.337 "ddgst": false 00:07:28.337 }, 00:07:28.337 "method": "bdev_nvme_attach_controller" 00:07:28.337 }' 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:28.337 "params": { 00:07:28.337 "name": "Nvme1", 00:07:28.337 "trtype": "tcp", 00:07:28.337 "traddr": "10.0.0.2", 00:07:28.337 "adrfam": "ipv4", 00:07:28.337 "trsvcid": "4420", 00:07:28.337 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:28.337 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:28.337 "hdgst": false, 00:07:28.337 "ddgst": false 00:07:28.337 }, 00:07:28.337 "method": "bdev_nvme_attach_controller" 00:07:28.337 }' 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:28.337 "params": { 00:07:28.337 "name": "Nvme1", 00:07:28.337 "trtype": "tcp", 00:07:28.337 "traddr": "10.0.0.2", 00:07:28.337 "adrfam": "ipv4", 00:07:28.337 "trsvcid": "4420", 00:07:28.337 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:28.337 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:28.337 "hdgst": false, 00:07:28.337 "ddgst": false 00:07:28.337 }, 00:07:28.337 "method": "bdev_nvme_attach_controller" 00:07:28.337 }' 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:28.337 20:37:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:28.337 "params": { 00:07:28.337 "name": "Nvme1", 00:07:28.337 "trtype": "tcp", 00:07:28.337 "traddr": "10.0.0.2", 00:07:28.337 "adrfam": "ipv4", 00:07:28.337 "trsvcid": "4420", 00:07:28.337 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:28.337 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:28.337 "hdgst": false, 00:07:28.337 "ddgst": false 00:07:28.337 }, 00:07:28.337 "method": "bdev_nvme_attach_controller" 00:07:28.337 }' 00:07:28.594 [2024-11-26 20:37:32.057503] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:07:28.594 [2024-11-26 20:37:32.057503] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:07:28.594 [2024-11-26 20:37:32.057503] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:07:28.594 [2024-11-26 20:37:32.057609] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-26 20:37:32.057610] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-26 20:37:32.057609] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:28.594 --proc-type=auto ] 00:07:28.594 --proc-type=auto ] 00:07:28.594 [2024-11-26 20:37:32.058969] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
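Between the target start above and the four concurrent bdevperf launches, the trace does two things: it provisions the target over RPC (rpc_cmd wraps scripts/rpc.py), and it builds the initiator-side JSON that each bdevperf instance reads from /dev/fd/63. Condensed, with the values exactly as substituted in this run; the outer "subsystems"/"bdev"/"config" wrapper is added by gen_nvmf_target_json and is not itself visible in the trace, so treat that structure as an assumption, and /tmp/nvme1.json is only an illustrative filename:

# Target-side provisioning, as issued through rpc_cmd above.
./scripts/rpc.py bdev_set_options -p 5 -c 1
./scripts/rpc.py framework_start_init
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator-side config each bdevperf receives (written to a file here instead of /dev/fd/63).
cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF

# The write-workload instance traced above; the read/flush/unmap runs differ only in -w, -m, and -i.
./build/examples/bdevperf -m 0x10 -i 1 --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256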
00:07:28.594 [2024-11-26 20:37:32.059037] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:28.594 [2024-11-26 20:37:32.244828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.852 [2024-11-26 20:37:32.300681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:28.852 [2024-11-26 20:37:32.349310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.852 [2024-11-26 20:37:32.404976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:28.852 [2024-11-26 20:37:32.452253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.852 [2024-11-26 20:37:32.510756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:28.852 [2024-11-26 20:37:32.530160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.109 [2024-11-26 20:37:32.583163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:29.109 Running I/O for 1 seconds... 00:07:29.109 Running I/O for 1 seconds... 00:07:29.366 Running I/O for 1 seconds... 00:07:29.366 Running I/O for 1 seconds... 00:07:30.297 6542.00 IOPS, 25.55 MiB/s 00:07:30.297 Latency(us) 00:07:30.297 [2024-11-26T19:37:33.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:30.297 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:30.297 Nvme1n1 : 1.02 6551.66 25.59 0.00 0.00 19340.64 8107.05 28350.39 00:07:30.297 [2024-11-26T19:37:33.994Z] =================================================================================================================== 00:07:30.297 [2024-11-26T19:37:33.994Z] Total : 6551.66 25.59 0.00 0.00 19340.64 8107.05 28350.39 00:07:30.297 187672.00 IOPS, 733.09 MiB/s 00:07:30.297 Latency(us) 00:07:30.297 [2024-11-26T19:37:33.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:30.297 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:30.297 Nvme1n1 : 1.00 187314.55 731.70 0.00 0.00 679.54 295.82 1881.13 00:07:30.297 [2024-11-26T19:37:33.994Z] =================================================================================================================== 00:07:30.297 [2024-11-26T19:37:33.994Z] Total : 187314.55 731.70 0.00 0.00 679.54 295.82 1881.13 00:07:30.297 6195.00 IOPS, 24.20 MiB/s 00:07:30.297 Latency(us) 00:07:30.297 [2024-11-26T19:37:33.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:30.297 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:30.297 Nvme1n1 : 1.01 6275.92 24.52 0.00 0.00 20311.06 6650.69 37282.70 00:07:30.297 [2024-11-26T19:37:33.994Z] =================================================================================================================== 00:07:30.297 [2024-11-26T19:37:33.994Z] Total : 6275.92 24.52 0.00 0.00 20311.06 6650.69 37282.70 00:07:30.298 9687.00 IOPS, 37.84 MiB/s 00:07:30.298 Latency(us) 00:07:30.298 [2024-11-26T19:37:33.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:30.298 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:30.298 Nvme1n1 : 1.01 9755.32 38.11 0.00 0.00 13072.78 5048.70 24855.13 00:07:30.298 [2024-11-26T19:37:33.995Z] 
=================================================================================================================== 00:07:30.298 [2024-11-26T19:37:33.995Z] Total : 9755.32 38.11 0.00 0.00 13072.78 5048.70 24855.13 00:07:30.298 20:37:33 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1571011 00:07:30.555 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1571013 00:07:30.555 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1571016 00:07:30.555 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:30.555 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.555 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:30.555 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.555 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:30.555 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:30.555 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:30.555 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:30.555 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:30.555 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:30.555 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:30.555 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:30.555 rmmod nvme_tcp 00:07:30.555 rmmod nvme_fabrics 00:07:30.555 rmmod nvme_keyring 00:07:30.555 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:30.555 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:30.555 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:30.555 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1570971 ']' 00:07:30.555 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1570971 00:07:30.555 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1570971 ']' 00:07:30.555 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1570971 00:07:30.555 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:30.555 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.555 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1570971 00:07:30.555 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.555 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:30.555 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 1570971' 00:07:30.555 killing process with pid 1570971 00:07:30.555 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1570971 00:07:30.555 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1570971 00:07:30.812 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:30.812 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:30.812 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:30.812 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:30.812 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:30.812 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:30.812 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:30.812 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:30.812 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:30.812 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.812 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.812 20:37:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:33.348 00:07:33.348 real 0m7.394s 00:07:33.348 user 0m16.408s 00:07:33.348 sys 0m3.693s 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:33.348 ************************************ 00:07:33.348 END TEST nvmf_bdev_io_wait 00:07:33.348 ************************************ 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:33.348 ************************************ 00:07:33.348 START TEST nvmf_queue_depth 00:07:33.348 ************************************ 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:33.348 * Looking for test storage... 
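The teardown that closes nvmf_bdev_io_wait above follows the standard nvmftestfini path: drop the subsystem, unload the host-side NVMe/TCP modules, stop the target, and undo the firewall and namespace changes. Condensed into plain shell with the PID and interface names from this run; the iptables pipeline is the composition the three xtraced commands imply, and the final namespace deletion is an assumption about what remove_spdk_ns does here, since the trace only shows the wrapper being invoked:

# Remove the subsystem, then unload initiator-side kernel modules (nvme_tcp, nvme_fabrics, nvme_keyring).
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Stop the target and undo the firewall and namespace changes.
kill 1570971 && wait 1570971                            # nvmfpid for this run
iptables-save | grep -v SPDK_NVMF | iptables-restore    # keeps everything except the harness-tagged rule
ip netns delete cvl_0_0_ns_spdk                         # assumption: remove_spdk_ns deletes the namespace
ip -4 addr flush cvl_0_1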
00:07:33.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:33.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.348 --rc genhtml_branch_coverage=1 00:07:33.348 --rc genhtml_function_coverage=1 00:07:33.348 --rc genhtml_legend=1 00:07:33.348 --rc geninfo_all_blocks=1 00:07:33.348 --rc geninfo_unexecuted_blocks=1 00:07:33.348 00:07:33.348 ' 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:33.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.348 --rc genhtml_branch_coverage=1 00:07:33.348 --rc genhtml_function_coverage=1 00:07:33.348 --rc genhtml_legend=1 00:07:33.348 --rc geninfo_all_blocks=1 00:07:33.348 --rc geninfo_unexecuted_blocks=1 00:07:33.348 00:07:33.348 ' 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:33.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.348 --rc genhtml_branch_coverage=1 00:07:33.348 --rc genhtml_function_coverage=1 00:07:33.348 --rc genhtml_legend=1 00:07:33.348 --rc geninfo_all_blocks=1 00:07:33.348 --rc geninfo_unexecuted_blocks=1 00:07:33.348 00:07:33.348 ' 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:33.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:33.348 --rc genhtml_branch_coverage=1 00:07:33.348 --rc genhtml_function_coverage=1 00:07:33.348 --rc genhtml_legend=1 00:07:33.348 --rc geninfo_all_blocks=1 00:07:33.348 --rc geninfo_unexecuted_blocks=1 00:07:33.348 00:07:33.348 ' 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.348 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.349 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.349 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.349 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:33.349 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.349 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:33.349 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:33.349 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:33.349 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:33.349 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:33.349 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.349 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:33.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:33.349 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:33.349 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:33.349 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:33.349 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:33.349 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:07:33.349 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:33.349 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:33.349 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:33.349 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:33.349 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:33.349 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:33.349 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:33.349 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.349 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:33.349 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.349 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:33.349 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:33.349 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:33.349 20:37:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:35.251 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:35.251 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:35.251 Found net devices under 0000:09:00.0: cvl_0_0 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:35.251 Found net devices under 0000:09:00.1: cvl_0_1 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:35.251 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:35.252 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:35.252 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:35.252 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:35.252 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:35.252 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:35.252 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:35.252 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:35.252 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:35.252 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:35.252 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:35.252 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:35.252 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:35.252 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:35.252 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:35.252 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:35.252 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:35.510 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:35.510 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:35.510 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:35.510 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:35.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:35.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:07:35.510 00:07:35.510 --- 10.0.0.2 ping statistics --- 00:07:35.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.510 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:07:35.510 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:35.510 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:35.510 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:07:35.510 00:07:35.510 --- 10.0.0.1 ping statistics --- 00:07:35.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.510 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:07:35.510 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:35.510 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:07:35.510 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:35.510 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:35.510 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:35.510 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:35.510 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:35.510 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:35.510 20:37:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:35.510 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:35.510 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:35.510 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:35.510 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:35.510 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1573237 00:07:35.510 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:35.510 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1573237 00:07:35.510 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1573237 ']' 00:07:35.510 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.510 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.510 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.510 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.510 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:35.510 [2024-11-26 20:37:39.077390] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
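Note: the nvmf_tcp_init phase just traced moves one E810 port into a target-side network namespace, keeps the other as the initiator interface in the root namespace, opens the NVMe/TCP port in iptables and ping-checks both directions. A minimal restatement of that sequence (interface names, addresses and port 4420 are this run's configuration):
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # root ns -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> root ns
    modprobe nvme-tcp                                   # kernel initiator support for later connects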
00:07:35.510 [2024-11-26 20:37:39.077487] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.510 [2024-11-26 20:37:39.151831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.767 [2024-11-26 20:37:39.206495] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:35.767 [2024-11-26 20:37:39.206551] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:35.767 [2024-11-26 20:37:39.206567] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:35.767 [2024-11-26 20:37:39.206591] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:35.767 [2024-11-26 20:37:39.206615] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:35.767 [2024-11-26 20:37:39.207349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.767 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.768 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:35.768 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:35.768 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:35.768 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:35.768 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:35.768 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:35.768 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.768 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:35.768 [2024-11-26 20:37:39.354039] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.768 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.768 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:35.768 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.768 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:35.768 Malloc0 00:07:35.768 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.768 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:35.768 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.768 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:35.768 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.768 20:37:39 
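Note: queue_depth.sh then brings the target application up inside that namespace and blocks until its RPC socket answers before configuring anything. Roughly what the traced nvmfappstart -m 0x2 amounts to in this run (the full workspace path is shortened here):
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!      # 1573237 in this run
    # waitforlisten polls until the target listens on /var/tmp/spdk.sock
    # ("Waiting for process to start up and listen on UNIX domain socket ..." above).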
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:35.768 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.768 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:35.768 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.768 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:35.768 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.768 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:35.768 [2024-11-26 20:37:39.403205] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:35.768 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.768 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1573277 00:07:35.768 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:35.768 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:35.768 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1573277 /var/tmp/bdevperf.sock 00:07:35.768 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1573277 ']' 00:07:35.768 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:35.768 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.768 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:35.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:35.768 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.768 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:35.768 [2024-11-26 20:37:39.460878] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
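Note: with the target running, the whole configuration is done over JSON-RPC: a TCP transport, a 64 MiB malloc bdev, and a subsystem exposing it on 10.0.0.2:4420. The rpc_cmd calls traced above can be replayed by hand with scripts/rpc.py (values copied verbatim from this trace; rpc.py defaults to /var/tmp/spdk.sock):
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420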
00:07:35.768 [2024-11-26 20:37:39.460965] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1573277 ] 00:07:36.025 [2024-11-26 20:37:39.532504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.025 [2024-11-26 20:37:39.594066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.025 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.025 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:36.025 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:36.025 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.025 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:36.283 NVMe0n1 00:07:36.283 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.283 20:37:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:36.283 Running I/O for 10 seconds... 00:07:38.594 8192.00 IOPS, 32.00 MiB/s [2024-11-26T19:37:43.223Z] 8495.50 IOPS, 33.19 MiB/s [2024-11-26T19:37:44.155Z] 8529.00 IOPS, 33.32 MiB/s [2024-11-26T19:37:45.087Z] 8570.25 IOPS, 33.48 MiB/s [2024-11-26T19:37:46.020Z] 8595.80 IOPS, 33.58 MiB/s [2024-11-26T19:37:46.951Z] 8628.50 IOPS, 33.71 MiB/s [2024-11-26T19:37:48.321Z] 8630.00 IOPS, 33.71 MiB/s [2024-11-26T19:37:49.254Z] 8696.50 IOPS, 33.97 MiB/s [2024-11-26T19:37:50.217Z] 8689.11 IOPS, 33.94 MiB/s [2024-11-26T19:37:50.217Z] 8697.60 IOPS, 33.98 MiB/s 00:07:46.520 Latency(us) 00:07:46.520 [2024-11-26T19:37:50.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:46.520 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:07:46.520 Verification LBA range: start 0x0 length 0x4000 00:07:46.520 NVMe0n1 : 10.07 8739.52 34.14 0.00 0.00 116713.64 20000.62 71070.15 00:07:46.520 [2024-11-26T19:37:50.217Z] =================================================================================================================== 00:07:46.520 [2024-11-26T19:37:50.217Z] Total : 8739.52 34.14 0.00 0.00 116713.64 20000.62 71070.15 00:07:46.520 { 00:07:46.520 "results": [ 00:07:46.520 { 00:07:46.520 "job": "NVMe0n1", 00:07:46.520 "core_mask": "0x1", 00:07:46.520 "workload": "verify", 00:07:46.520 "status": "finished", 00:07:46.520 "verify_range": { 00:07:46.520 "start": 0, 00:07:46.520 "length": 16384 00:07:46.520 }, 00:07:46.520 "queue_depth": 1024, 00:07:46.520 "io_size": 4096, 00:07:46.520 "runtime": 10.069202, 00:07:46.520 "iops": 8739.520768378667, 00:07:46.520 "mibps": 34.13875300147917, 00:07:46.520 "io_failed": 0, 00:07:46.520 "io_timeout": 0, 00:07:46.520 "avg_latency_us": 116713.63501252525, 00:07:46.520 "min_latency_us": 20000.616296296295, 00:07:46.520 "max_latency_us": 71070.15111111112 00:07:46.520 } 00:07:46.520 ], 00:07:46.520 "core_count": 1 00:07:46.520 } 00:07:46.520 20:37:50 
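Note: the queue-depth exercise itself runs in a second process. bdevperf is started with -q 1024 (the depth under test), an NVMe-oF controller is attached to the exported subsystem over bdevperf's own RPC socket, and perform_tests drives 4 KiB verify I/O for 10 seconds; this run settled at roughly 8.7 K IOPS (~34 MiB/s) with ~117 ms average latency at that depth, per the JSON result above. Replayed by hand, the sequence is approximately:
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    # wait for /var/tmp/bdevperf.sock, then attach and run:
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests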
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1573277 00:07:46.520 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1573277 ']' 00:07:46.520 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1573277 00:07:46.520 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:46.520 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:46.520 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1573277 00:07:46.520 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:46.520 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:46.520 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1573277' 00:07:46.520 killing process with pid 1573277 00:07:46.520 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1573277 00:07:46.520 Received shutdown signal, test time was about 10.000000 seconds 00:07:46.520 00:07:46.520 Latency(us) 00:07:46.520 [2024-11-26T19:37:50.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:46.520 [2024-11-26T19:37:50.217Z] =================================================================================================================== 00:07:46.520 [2024-11-26T19:37:50.217Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:46.520 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1573277 00:07:46.777 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:46.777 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:07:46.777 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:46.777 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:07:46.777 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:46.777 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:07:46.777 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:46.777 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:46.777 rmmod nvme_tcp 00:07:46.777 rmmod nvme_fabrics 00:07:46.777 rmmod nvme_keyring 00:07:46.777 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:46.777 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:07:46.777 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:07:46.777 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1573237 ']' 00:07:46.777 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1573237 00:07:46.777 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1573237 ']' 00:07:46.777 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 1573237 00:07:46.777 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:46.777 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:46.777 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1573237 00:07:46.777 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:46.777 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:46.777 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1573237' 00:07:46.777 killing process with pid 1573237 00:07:46.777 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1573237 00:07:46.777 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1573237 00:07:47.036 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:47.036 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:47.036 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:47.036 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:07:47.036 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:07:47.036 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:47.036 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:07:47.036 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:47.036 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:47.036 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.036 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:47.036 20:37:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:49.567 00:07:49.567 real 0m16.212s 00:07:49.567 user 0m22.539s 00:07:49.567 sys 0m3.262s 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:49.567 ************************************ 00:07:49.567 END TEST nvmf_queue_depth 00:07:49.567 ************************************ 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core -- 
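Note: teardown then runs in reverse, which is what the nvmftestfini trace above shows: both PIDs are killed and reaped, the kernel NVMe modules are unloaded, the SPDK-tagged iptables rule is stripped, and the namespace addressing is flushed. In outline, using this run's PIDs and names (the body of _remove_spdk_ns is not expanded in the trace, so the netns deletion line is an assumption):
    kill 1573277 && wait 1573277                            # bdevperf
    kill 1573237 && wait 1573237                            # nvmf_tgt
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the rule tagged SPDK_NVMF
    ip netns delete cvl_0_0_ns_spdk                         # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1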
common/autotest_common.sh@10 -- # set +x 00:07:49.567 ************************************ 00:07:49.567 START TEST nvmf_target_multipath 00:07:49.567 ************************************ 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:07:49.567 * Looking for test storage... 00:07:49.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:49.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.567 --rc genhtml_branch_coverage=1 00:07:49.567 --rc genhtml_function_coverage=1 00:07:49.567 --rc genhtml_legend=1 00:07:49.567 --rc geninfo_all_blocks=1 00:07:49.567 --rc geninfo_unexecuted_blocks=1 00:07:49.567 00:07:49.567 ' 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:49.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.567 --rc genhtml_branch_coverage=1 00:07:49.567 --rc genhtml_function_coverage=1 00:07:49.567 --rc genhtml_legend=1 00:07:49.567 --rc geninfo_all_blocks=1 00:07:49.567 --rc geninfo_unexecuted_blocks=1 00:07:49.567 00:07:49.567 ' 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:49.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.567 --rc genhtml_branch_coverage=1 00:07:49.567 --rc genhtml_function_coverage=1 00:07:49.567 --rc genhtml_legend=1 00:07:49.567 --rc geninfo_all_blocks=1 00:07:49.567 --rc geninfo_unexecuted_blocks=1 00:07:49.567 00:07:49.567 ' 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:49.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.567 --rc genhtml_branch_coverage=1 00:07:49.567 --rc genhtml_function_coverage=1 00:07:49.567 --rc genhtml_legend=1 00:07:49.567 --rc geninfo_all_blocks=1 00:07:49.567 --rc geninfo_unexecuted_blocks=1 00:07:49.567 00:07:49.567 ' 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
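Note: before the multipath test proper, autotest_common.sh picks its lcov option spelling by comparing the installed lcov version against 1.15 with the cmp_versions helper traced above: each version string is split on ".", "-" and ":" and compared numerically component by component. A small standalone sketch reconstructed from that trace (helper renamed, details approximate):
    cmp_lt() {                     # returns 0 if $1 is an older version than $2
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                   # equal versions are not "less than"
    }
    cmp_lt 1.15 2 && echo "1.15 is older than 2"   # matches the 'lt 1.15 2' call in the trace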
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:49.567 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:49.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:07:49.568 20:37:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:51.505 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:51.505 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:07:51.505 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:51.505 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:51.505 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:51.505 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:51.505 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:51.505 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:07:51.505 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:51.505 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:07:51.505 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:07:51.505 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:07:51.505 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:07:51.505 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:07:51.505 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:07:51.505 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:51.505 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:51.505 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:51.505 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:51.505 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:51.505 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:51.505 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:51.505 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:51.505 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:51.506 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:51.506 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:51.506 Found net devices under 0000:09:00.0: cvl_0_0 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.506 20:37:55 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:51.506 Found net devices under 0000:09:00.1: cvl_0_1 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:07:51.506 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:51.764 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:51.764 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:51.764 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:51.765 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:51.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:51.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.328 ms 00:07:51.765 00:07:51.765 --- 10.0.0.2 ping statistics --- 00:07:51.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.765 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:07:51.765 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:51.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:51.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:07:51.765 00:07:51.765 --- 10.0.0.1 ping statistics --- 00:07:51.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.765 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:07:51.765 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:51.765 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:07:51.765 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:51.765 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:51.765 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:51.765 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:51.765 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:51.765 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:51.765 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:51.765 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:07:51.765 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:07:51.765 only one NIC for nvmf test 00:07:51.765 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:07:51.765 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:51.765 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:07:51.765 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:51.765 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
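Note: the multipath test never reaches multipath I/O on this rig. It needs a second NIC/IP pair, and since the second target IP is left empty by nvmf_tcp_init it prints "only one NIC for nvmf test", tears the stack back down and exits 0, so the test is reported as skipped-but-passing rather than failed. The guard traced at target/multipath.sh@45-48 amounts to the following sketch (the exact variable tested is not expanded in the trace; NVMF_SECOND_TARGET_IP, set empty above, is assumed):
    if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
        echo 'only one NIC for nvmf test'
        nvmftestfini
        exit 0
    fi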
00:07:51.765 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:51.765 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:51.765 rmmod nvme_tcp 00:07:51.765 rmmod nvme_fabrics 00:07:51.765 rmmod nvme_keyring 00:07:51.765 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:51.765 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:07:51.765 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:07:51.765 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:07:51.765 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:51.765 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:51.765 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:51.765 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:07:51.765 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:07:51.765 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:51.765 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:07:51.765 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:51.765 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:51.765 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.765 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:51.765 20:37:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:54.304 00:07:54.304 real 0m4.664s 00:07:54.304 user 0m0.890s 00:07:54.304 sys 0m1.768s 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:54.304 ************************************ 00:07:54.304 END TEST nvmf_target_multipath 00:07:54.304 ************************************ 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:54.304 ************************************ 00:07:54.304 START TEST nvmf_zcopy 00:07:54.304 ************************************ 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:07:54.304 * Looking for test storage... 
00:07:54.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:54.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.304 --rc genhtml_branch_coverage=1 00:07:54.304 --rc genhtml_function_coverage=1 00:07:54.304 --rc genhtml_legend=1 00:07:54.304 --rc geninfo_all_blocks=1 00:07:54.304 --rc geninfo_unexecuted_blocks=1 00:07:54.304 00:07:54.304 ' 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:54.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.304 --rc genhtml_branch_coverage=1 00:07:54.304 --rc genhtml_function_coverage=1 00:07:54.304 --rc genhtml_legend=1 00:07:54.304 --rc geninfo_all_blocks=1 00:07:54.304 --rc geninfo_unexecuted_blocks=1 00:07:54.304 00:07:54.304 ' 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:54.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.304 --rc genhtml_branch_coverage=1 00:07:54.304 --rc genhtml_function_coverage=1 00:07:54.304 --rc genhtml_legend=1 00:07:54.304 --rc geninfo_all_blocks=1 00:07:54.304 --rc geninfo_unexecuted_blocks=1 00:07:54.304 00:07:54.304 ' 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:54.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.304 --rc genhtml_branch_coverage=1 00:07:54.304 --rc genhtml_function_coverage=1 00:07:54.304 --rc genhtml_legend=1 00:07:54.304 --rc geninfo_all_blocks=1 00:07:54.304 --rc geninfo_unexecuted_blocks=1 00:07:54.304 00:07:54.304 ' 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:54.304 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:54.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:07:54.305 20:37:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:56.208 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:56.208 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:56.208 Found net devices under 0000:09:00.0: cvl_0_0 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:56.208 Found net devices under 0000:09:00.1: cvl_0_1 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:56.208 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:56.209 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:56.209 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:56.209 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:56.209 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:56.209 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:56.209 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:56.467 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:56.467 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:56.467 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:56.467 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:56.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:56.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:07:56.467 00:07:56.467 --- 10.0.0.2 ping statistics --- 00:07:56.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.467 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:07:56.467 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:56.467 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:56.467 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:07:56.467 00:07:56.467 --- 10.0.0.1 ping statistics --- 00:07:56.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.467 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:07:56.467 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:56.467 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:07:56.467 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:56.467 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:56.467 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:56.467 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:56.467 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:56.467 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:56.467 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:56.467 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:07:56.467 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:56.467 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:56.467 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:56.467 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1578591 00:07:56.467 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:56.467 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1578591 00:07:56.467 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1578591 ']' 00:07:56.467 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.467 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:56.467 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.467 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.467 20:37:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:56.467 [2024-11-26 20:38:00.022334] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
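
For readers skimming the xtrace above: nvmf/common.sh moves one port of the E810 pair into a private network namespace and runs the target there, while the initiator side stays in the root namespace. A condensed sketch of the commands the harness just executed, using the interface names (cvl_0_0, cvl_0_1) and the 10.0.0.0/24 addresses from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                 # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target namespace -> root namespace
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

The harness then waits for the target's RPC socket (the waitforlisten step above) before issuing any configuration RPCs.
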
00:07:56.467 [2024-11-26 20:38:00.022482] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.467 [2024-11-26 20:38:00.097837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.467 [2024-11-26 20:38:00.155854] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:56.467 [2024-11-26 20:38:00.155907] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:56.467 [2024-11-26 20:38:00.155922] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:56.467 [2024-11-26 20:38:00.155933] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:56.467 [2024-11-26 20:38:00.155942] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:56.467 [2024-11-26 20:38:00.156504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:56.726 [2024-11-26 20:38:00.309768] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:56.726 [2024-11-26 20:38:00.325970] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:56.726 malloc0 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:56.726 { 00:07:56.726 "params": { 00:07:56.726 "name": "Nvme$subsystem", 00:07:56.726 "trtype": "$TEST_TRANSPORT", 00:07:56.726 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:56.726 "adrfam": "ipv4", 00:07:56.726 "trsvcid": "$NVMF_PORT", 00:07:56.726 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:56.726 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:56.726 "hdgst": ${hdgst:-false}, 00:07:56.726 "ddgst": ${ddgst:-false} 00:07:56.726 }, 00:07:56.726 "method": "bdev_nvme_attach_controller" 00:07:56.726 } 00:07:56.726 EOF 00:07:56.726 )") 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
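
Collected from the trace above, the entire target configuration for this test is a short RPC sequence; rpc_cmd is the harness's wrapper around scripts/rpc.py pointed at this target's /var/tmp/spdk.sock, and the arguments below are copied verbatim from this run:

  rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy                           # --zcopy enables the zero-copy path under test
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_malloc_create 32 4096 -b malloc0                                  # 32 MiB RAM-backed bdev, 4096-byte blocks
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1          # namespace 1 backs the bdevperf I/O below
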
00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:07:56.726 20:38:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:56.726 "params": { 00:07:56.726 "name": "Nvme1", 00:07:56.726 "trtype": "tcp", 00:07:56.726 "traddr": "10.0.0.2", 00:07:56.726 "adrfam": "ipv4", 00:07:56.726 "trsvcid": "4420", 00:07:56.726 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:56.726 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:56.726 "hdgst": false, 00:07:56.726 "ddgst": false 00:07:56.726 }, 00:07:56.726 "method": "bdev_nvme_attach_controller" 00:07:56.726 }' 00:07:56.726 [2024-11-26 20:38:00.411413] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:07:56.726 [2024-11-26 20:38:00.411490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1578616 ] 00:07:56.983 [2024-11-26 20:38:00.480397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.983 [2024-11-26 20:38:00.540569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.240 Running I/O for 10 seconds... 00:07:59.103 5741.00 IOPS, 44.85 MiB/s [2024-11-26T19:38:04.170Z] 5805.00 IOPS, 45.35 MiB/s [2024-11-26T19:38:05.144Z] 5822.67 IOPS, 45.49 MiB/s [2024-11-26T19:38:06.075Z] 5834.25 IOPS, 45.58 MiB/s [2024-11-26T19:38:07.006Z] 5840.20 IOPS, 45.63 MiB/s [2024-11-26T19:38:07.940Z] 5844.50 IOPS, 45.66 MiB/s [2024-11-26T19:38:08.873Z] 5847.14 IOPS, 45.68 MiB/s [2024-11-26T19:38:09.803Z] 5856.50 IOPS, 45.75 MiB/s [2024-11-26T19:38:11.176Z] 5857.89 IOPS, 45.76 MiB/s [2024-11-26T19:38:11.176Z] 5852.20 IOPS, 45.72 MiB/s 00:08:07.479 Latency(us) 00:08:07.479 [2024-11-26T19:38:11.176Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:07.479 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:07.479 Verification LBA range: start 0x0 length 0x1000 00:08:07.479 Nvme1n1 : 10.02 5855.49 45.75 0.00 0.00 21801.36 3762.25 30292.20 00:08:07.479 [2024-11-26T19:38:11.176Z] =================================================================================================================== 00:08:07.479 [2024-11-26T19:38:11.176Z] Total : 5855.49 45.75 0.00 0.00 21801.36 3762.25 30292.20 00:08:07.479 20:38:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1579822 00:08:07.479 20:38:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:07.479 20:38:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:07.479 20:38:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:07.479 20:38:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:07.479 20:38:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:07.479 20:38:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:07.479 20:38:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:07.479 20:38:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:07.479 { 00:08:07.479 "params": { 00:08:07.479 "name": 
"Nvme$subsystem", 00:08:07.479 "trtype": "$TEST_TRANSPORT", 00:08:07.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:07.479 "adrfam": "ipv4", 00:08:07.479 "trsvcid": "$NVMF_PORT", 00:08:07.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:07.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:07.479 "hdgst": ${hdgst:-false}, 00:08:07.479 "ddgst": ${ddgst:-false} 00:08:07.479 }, 00:08:07.479 "method": "bdev_nvme_attach_controller" 00:08:07.479 } 00:08:07.479 EOF 00:08:07.479 )") 00:08:07.479 20:38:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:07.479 [2024-11-26 20:38:11.028881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.479 [2024-11-26 20:38:11.028925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.479 20:38:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:08:07.479 20:38:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:07.479 20:38:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:07.479 "params": { 00:08:07.479 "name": "Nvme1", 00:08:07.479 "trtype": "tcp", 00:08:07.479 "traddr": "10.0.0.2", 00:08:07.479 "adrfam": "ipv4", 00:08:07.479 "trsvcid": "4420", 00:08:07.479 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:07.479 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:07.479 "hdgst": false, 00:08:07.479 "ddgst": false 00:08:07.479 }, 00:08:07.479 "method": "bdev_nvme_attach_controller" 00:08:07.479 }' 00:08:07.479 [2024-11-26 20:38:11.036808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.479 [2024-11-26 20:38:11.036830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.479 [2024-11-26 20:38:11.044830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.479 [2024-11-26 20:38:11.044850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.479 [2024-11-26 20:38:11.052851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.479 [2024-11-26 20:38:11.052871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.479 [2024-11-26 20:38:11.060877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.479 [2024-11-26 20:38:11.060899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.479 [2024-11-26 20:38:11.067655] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:08:07.479 [2024-11-26 20:38:11.067726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1579822 ] 00:08:07.479 [2024-11-26 20:38:11.068900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.479 [2024-11-26 20:38:11.068920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.479 [2024-11-26 20:38:11.076917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.479 [2024-11-26 20:38:11.076937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.479 [2024-11-26 20:38:11.084940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.479 [2024-11-26 20:38:11.084960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.479 [2024-11-26 20:38:11.092961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.479 [2024-11-26 20:38:11.092980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.479 [2024-11-26 20:38:11.100984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.479 [2024-11-26 20:38:11.101005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.479 [2024-11-26 20:38:11.109008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.479 [2024-11-26 20:38:11.109028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.479 [2024-11-26 20:38:11.117029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.479 [2024-11-26 20:38:11.117049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.479 [2024-11-26 20:38:11.125049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.480 [2024-11-26 20:38:11.125068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.480 [2024-11-26 20:38:11.133087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.480 [2024-11-26 20:38:11.133107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.480 [2024-11-26 20:38:11.135454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.480 [2024-11-26 20:38:11.141101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.480 [2024-11-26 20:38:11.141122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.480 [2024-11-26 20:38:11.149155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.480 [2024-11-26 20:38:11.149190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.480 [2024-11-26 20:38:11.157143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.480 [2024-11-26 20:38:11.157164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.480 [2024-11-26 20:38:11.165164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.480 [2024-11-26 20:38:11.165185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:08:07.480 [2024-11-26 20:38:11.173182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.480 [2024-11-26 20:38:11.173202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.737 [2024-11-26 20:38:11.181202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.737 [2024-11-26 20:38:11.181221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.738 [2024-11-26 20:38:11.189224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.738 [2024-11-26 20:38:11.189244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.738 [2024-11-26 20:38:11.197245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.738 [2024-11-26 20:38:11.197265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.738 [2024-11-26 20:38:11.199033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.738 [2024-11-26 20:38:11.205265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.738 [2024-11-26 20:38:11.205299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.738 [2024-11-26 20:38:11.213327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.738 [2024-11-26 20:38:11.213369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.738 [2024-11-26 20:38:11.221391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.738 [2024-11-26 20:38:11.221428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.738 [2024-11-26 20:38:11.229422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.738 [2024-11-26 20:38:11.229459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.738 [2024-11-26 20:38:11.237432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.738 [2024-11-26 20:38:11.237468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.738 [2024-11-26 20:38:11.245456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.738 [2024-11-26 20:38:11.245492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.738 [2024-11-26 20:38:11.253466] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.738 [2024-11-26 20:38:11.253504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.738 [2024-11-26 20:38:11.261484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.738 [2024-11-26 20:38:11.261520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.738 [2024-11-26 20:38:11.269470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.738 [2024-11-26 20:38:11.269493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.738 [2024-11-26 20:38:11.277526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.738 [2024-11-26 20:38:11.277562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.738 [2024-11-26 
20:38:11.285546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.738 [2024-11-26 20:38:11.285609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.738 [2024-11-26 20:38:11.293552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.738 [2024-11-26 20:38:11.293580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.738 [2024-11-26 20:38:11.301554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.738 [2024-11-26 20:38:11.301574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.738 [2024-11-26 20:38:11.309582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.738 [2024-11-26 20:38:11.309619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.738 [2024-11-26 20:38:11.317799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.738 [2024-11-26 20:38:11.317838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.738 [2024-11-26 20:38:11.325739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.738 [2024-11-26 20:38:11.325761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.738 [2024-11-26 20:38:11.333769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.738 [2024-11-26 20:38:11.333793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.738 [2024-11-26 20:38:11.341785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.738 [2024-11-26 20:38:11.341815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.738 [2024-11-26 20:38:11.349820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.738 [2024-11-26 20:38:11.349844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.738 [2024-11-26 20:38:11.357837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.738 [2024-11-26 20:38:11.357861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.738 [2024-11-26 20:38:11.365855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.738 [2024-11-26 20:38:11.365878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.738 [2024-11-26 20:38:11.373874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.738 [2024-11-26 20:38:11.373894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.738 [2024-11-26 20:38:11.381901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.738 [2024-11-26 20:38:11.381925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.738 Running I/O for 5 seconds... 
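
The wall of "Requested NSID 1 already in use" / "Unable to add namespace" pairs above and below is the target rejecting repeated nvmf_subsystem_add_ns calls for a namespace that already exists (it was added once during setup). The script is evidently firing these RPCs back to back while bdevperf starts its 5-second randrw pass; the exact loop is not visible in this log, but a loop of roughly this shape would produce the same pattern (hypothetical reconstruction, not zcopy.sh's actual code):

  # Hypothetical sketch only: keep re-adding NSID 1 while bdevperf ($perfpid,
  # captured above) is alive; every call fails with the error logged here.
  while kill -0 "$perfpid" 2>/dev/null; do
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
  done
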
00:08:07.738 [2024-11-26 20:38:11.389925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.738 [2024-11-26 20:38:11.389947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.738 [2024-11-26 20:38:11.402446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.738 [2024-11-26 20:38:11.402475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.738 [2024-11-26 20:38:11.412397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.738 [2024-11-26 20:38:11.412426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.738 [2024-11-26 20:38:11.424845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.738 [2024-11-26 20:38:11.424873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.996 [2024-11-26 20:38:11.436220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.996 [2024-11-26 20:38:11.436249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.996 [2024-11-26 20:38:11.447917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.996 [2024-11-26 20:38:11.447944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.996 [2024-11-26 20:38:11.461846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.996 [2024-11-26 20:38:11.461873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.996 [2024-11-26 20:38:11.472834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.996 [2024-11-26 20:38:11.472861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.996 [2024-11-26 20:38:11.484753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.996 [2024-11-26 20:38:11.484780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.996 [2024-11-26 20:38:11.496558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.996 [2024-11-26 20:38:11.496604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.996 [2024-11-26 20:38:11.507752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.996 [2024-11-26 20:38:11.507779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.996 [2024-11-26 20:38:11.519876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.996 [2024-11-26 20:38:11.519903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.996 [2024-11-26 20:38:11.531689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.996 [2024-11-26 20:38:11.531716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.996 [2024-11-26 20:38:11.543507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.996 [2024-11-26 20:38:11.543536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.996 [2024-11-26 20:38:11.555116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.996 
[2024-11-26 20:38:11.555143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.996 [2024-11-26 20:38:11.566324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.996 [2024-11-26 20:38:11.566353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.996 [2024-11-26 20:38:11.577746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.996 [2024-11-26 20:38:11.577774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.996 [2024-11-26 20:38:11.588846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.996 [2024-11-26 20:38:11.588875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.996 [2024-11-26 20:38:11.600401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.996 [2024-11-26 20:38:11.600444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.996 [2024-11-26 20:38:11.611942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.996 [2024-11-26 20:38:11.611969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.996 [2024-11-26 20:38:11.625475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.996 [2024-11-26 20:38:11.625504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.996 [2024-11-26 20:38:11.636891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.996 [2024-11-26 20:38:11.636918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.996 [2024-11-26 20:38:11.647855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.996 [2024-11-26 20:38:11.647883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.996 [2024-11-26 20:38:11.659619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.996 [2024-11-26 20:38:11.659662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.996 [2024-11-26 20:38:11.671242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.996 [2024-11-26 20:38:11.671269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:07.996 [2024-11-26 20:38:11.682571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:07.996 [2024-11-26 20:38:11.682615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.262 [2024-11-26 20:38:11.693605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.262 [2024-11-26 20:38:11.693634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.262 [2024-11-26 20:38:11.704999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.262 [2024-11-26 20:38:11.705026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.262 [2024-11-26 20:38:11.716981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.262 [2024-11-26 20:38:11.717008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.262 [2024-11-26 20:38:11.728398] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.262 [2024-11-26 20:38:11.728426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.262 [2024-11-26 20:38:11.739822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.262 [2024-11-26 20:38:11.739850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.262 [2024-11-26 20:38:11.751154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.262 [2024-11-26 20:38:11.751181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.262 [2024-11-26 20:38:11.764789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.262 [2024-11-26 20:38:11.764830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.262 [2024-11-26 20:38:11.775396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.262 [2024-11-26 20:38:11.775424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.262 [2024-11-26 20:38:11.786465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.262 [2024-11-26 20:38:11.786494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.262 [2024-11-26 20:38:11.798067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.262 [2024-11-26 20:38:11.798095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.262 [2024-11-26 20:38:11.810300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.262 [2024-11-26 20:38:11.810338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.262 [2024-11-26 20:38:11.822617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.262 [2024-11-26 20:38:11.822659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.262 [2024-11-26 20:38:11.834170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.262 [2024-11-26 20:38:11.834197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.262 [2024-11-26 20:38:11.846422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.262 [2024-11-26 20:38:11.846465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.262 [2024-11-26 20:38:11.858376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.263 [2024-11-26 20:38:11.858403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.263 [2024-11-26 20:38:11.870221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.263 [2024-11-26 20:38:11.870248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.263 [2024-11-26 20:38:11.882052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.263 [2024-11-26 20:38:11.882079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.263 [2024-11-26 20:38:11.895759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.263 [2024-11-26 20:38:11.895787] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.263 [2024-11-26 20:38:11.906611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.263 [2024-11-26 20:38:11.906639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.263 [2024-11-26 20:38:11.917744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.263 [2024-11-26 20:38:11.917771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.263 [2024-11-26 20:38:11.928897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.263 [2024-11-26 20:38:11.928924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.263 [2024-11-26 20:38:11.940347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.263 [2024-11-26 20:38:11.940375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.263 [2024-11-26 20:38:11.951696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.263 [2024-11-26 20:38:11.951722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.521 [2024-11-26 20:38:11.963234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.521 [2024-11-26 20:38:11.963262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.521 [2024-11-26 20:38:11.974497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.521 [2024-11-26 20:38:11.974551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.521 [2024-11-26 20:38:11.985571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.521 [2024-11-26 20:38:11.985599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.521 [2024-11-26 20:38:11.997095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.521 [2024-11-26 20:38:11.997121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.521 [2024-11-26 20:38:12.008342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.521 [2024-11-26 20:38:12.008371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.521 [2024-11-26 20:38:12.020067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.521 [2024-11-26 20:38:12.020095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.521 [2024-11-26 20:38:12.031546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.521 [2024-11-26 20:38:12.031575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.521 [2024-11-26 20:38:12.042998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.521 [2024-11-26 20:38:12.043025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.521 [2024-11-26 20:38:12.054179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.521 [2024-11-26 20:38:12.054207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.521 [2024-11-26 20:38:12.067891] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.521 [2024-11-26 20:38:12.067918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.521 [2024-11-26 20:38:12.078962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.521 [2024-11-26 20:38:12.078990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.521 [2024-11-26 20:38:12.090641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.521 [2024-11-26 20:38:12.090682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.521 [2024-11-26 20:38:12.102123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.521 [2024-11-26 20:38:12.102150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.521 [2024-11-26 20:38:12.113474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.521 [2024-11-26 20:38:12.113518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.521 [2024-11-26 20:38:12.124864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.521 [2024-11-26 20:38:12.124890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.521 [2024-11-26 20:38:12.136177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.521 [2024-11-26 20:38:12.136205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.521 [2024-11-26 20:38:12.147689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.521 [2024-11-26 20:38:12.147715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.521 [2024-11-26 20:38:12.159592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.521 [2024-11-26 20:38:12.159634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.521 [2024-11-26 20:38:12.171491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.521 [2024-11-26 20:38:12.171534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.521 [2024-11-26 20:38:12.183694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.521 [2024-11-26 20:38:12.183720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.521 [2024-11-26 20:38:12.195479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.521 [2024-11-26 20:38:12.195516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.521 [2024-11-26 20:38:12.206983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.521 [2024-11-26 20:38:12.207010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.779 [2024-11-26 20:38:12.220369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.779 [2024-11-26 20:38:12.220398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.779 [2024-11-26 20:38:12.231484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.779 [2024-11-26 20:38:12.231512] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.779 [2024-11-26 20:38:12.243042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.779 [2024-11-26 20:38:12.243070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.779 [2024-11-26 20:38:12.254237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.779 [2024-11-26 20:38:12.254264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.779 [2024-11-26 20:38:12.265456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.779 [2024-11-26 20:38:12.265485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.779 [2024-11-26 20:38:12.276715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.779 [2024-11-26 20:38:12.276742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.779 [2024-11-26 20:38:12.288521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.779 [2024-11-26 20:38:12.288550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.779 [2024-11-26 20:38:12.299778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.779 [2024-11-26 20:38:12.299821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.779 [2024-11-26 20:38:12.311126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.779 [2024-11-26 20:38:12.311155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.779 [2024-11-26 20:38:12.322412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.779 [2024-11-26 20:38:12.322441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.779 [2024-11-26 20:38:12.335902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.779 [2024-11-26 20:38:12.335930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.779 [2024-11-26 20:38:12.346987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.779 [2024-11-26 20:38:12.347015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.779 [2024-11-26 20:38:12.358589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.779 [2024-11-26 20:38:12.358632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.779 [2024-11-26 20:38:12.370193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.779 [2024-11-26 20:38:12.370221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.779 [2024-11-26 20:38:12.381882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.779 [2024-11-26 20:38:12.381910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.779 10926.00 IOPS, 85.36 MiB/s [2024-11-26T19:38:12.476Z] [2024-11-26 20:38:12.394895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.779 [2024-11-26 20:38:12.394922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.779 [2024-11-26 
20:38:12.405654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.779 [2024-11-26 20:38:12.405705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.779 [2024-11-26 20:38:12.416543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.779 [2024-11-26 20:38:12.416579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.779 [2024-11-26 20:38:12.428081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.779 [2024-11-26 20:38:12.428109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.779 [2024-11-26 20:38:12.441351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.779 [2024-11-26 20:38:12.441381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.779 [2024-11-26 20:38:12.452254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.779 [2024-11-26 20:38:12.452299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:08.779 [2024-11-26 20:38:12.463181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:08.779 [2024-11-26 20:38:12.463210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.038 [2024-11-26 20:38:12.474675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.038 [2024-11-26 20:38:12.474705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.038 [2024-11-26 20:38:12.485751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.038 [2024-11-26 20:38:12.485779] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.038 [2024-11-26 20:38:12.497229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.038 [2024-11-26 20:38:12.497256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.038 [2024-11-26 20:38:12.508734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.038 [2024-11-26 20:38:12.508761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.038 [2024-11-26 20:38:12.519986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.038 [2024-11-26 20:38:12.520012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.038 [2024-11-26 20:38:12.531578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.038 [2024-11-26 20:38:12.531619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.038 [2024-11-26 20:38:12.542761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.038 [2024-11-26 20:38:12.542788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.038 [2024-11-26 20:38:12.554197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.038 [2024-11-26 20:38:12.554224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.038 [2024-11-26 20:38:12.565209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.038 [2024-11-26 20:38:12.565237] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.038 [2024-11-26 20:38:12.576617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.038 [2024-11-26 20:38:12.576659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.038 [2024-11-26 20:38:12.588003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.038 [2024-11-26 20:38:12.588031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.038 [2024-11-26 20:38:12.599704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.038 [2024-11-26 20:38:12.599733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.038 [2024-11-26 20:38:12.610989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.038 [2024-11-26 20:38:12.611017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.038 [2024-11-26 20:38:12.622585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.038 [2024-11-26 20:38:12.622613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.038 [2024-11-26 20:38:12.634416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.038 [2024-11-26 20:38:12.634444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.038 [2024-11-26 20:38:12.646131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.038 [2024-11-26 20:38:12.646158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.038 [2024-11-26 20:38:12.658172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.038 [2024-11-26 20:38:12.658200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.038 [2024-11-26 20:38:12.671539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.038 [2024-11-26 20:38:12.671568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.038 [2024-11-26 20:38:12.682548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.038 [2024-11-26 20:38:12.682577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.038 [2024-11-26 20:38:12.694319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.038 [2024-11-26 20:38:12.694348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.038 [2024-11-26 20:38:12.705705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.038 [2024-11-26 20:38:12.705733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.038 [2024-11-26 20:38:12.717038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.038 [2024-11-26 20:38:12.717066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.038 [2024-11-26 20:38:12.728272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.038 [2024-11-26 20:38:12.728301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.296 [2024-11-26 20:38:12.739898] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.296 [2024-11-26 20:38:12.739925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.296 [2024-11-26 20:38:12.751691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.296 [2024-11-26 20:38:12.751719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.296 [2024-11-26 20:38:12.763850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.296 [2024-11-26 20:38:12.763878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.296 [2024-11-26 20:38:12.775387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.296 [2024-11-26 20:38:12.775415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.296 [2024-11-26 20:38:12.788686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.296 [2024-11-26 20:38:12.788715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.296 [2024-11-26 20:38:12.799273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.296 [2024-11-26 20:38:12.799330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.296 [2024-11-26 20:38:12.811343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.296 [2024-11-26 20:38:12.811386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.296 [2024-11-26 20:38:12.822993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.296 [2024-11-26 20:38:12.823021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.296 [2024-11-26 20:38:12.834152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.296 [2024-11-26 20:38:12.834181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.296 [2024-11-26 20:38:12.845315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.296 [2024-11-26 20:38:12.845342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.296 [2024-11-26 20:38:12.856796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.296 [2024-11-26 20:38:12.856823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.296 [2024-11-26 20:38:12.868731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.296 [2024-11-26 20:38:12.868757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.296 [2024-11-26 20:38:12.880916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.296 [2024-11-26 20:38:12.880943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.296 [2024-11-26 20:38:12.892059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.296 [2024-11-26 20:38:12.892086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.296 [2024-11-26 20:38:12.904170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.296 [2024-11-26 20:38:12.904198] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.296 [2024-11-26 20:38:12.915846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.296 [2024-11-26 20:38:12.915872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.296 [2024-11-26 20:38:12.927461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.296 [2024-11-26 20:38:12.927490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.296 [2024-11-26 20:38:12.939322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.296 [2024-11-26 20:38:12.939351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.296 [2024-11-26 20:38:12.950491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.296 [2024-11-26 20:38:12.950519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.296 [2024-11-26 20:38:12.962018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.296 [2024-11-26 20:38:12.962045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.296 [2024-11-26 20:38:12.973670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.296 [2024-11-26 20:38:12.973697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.296 [2024-11-26 20:38:12.985081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.296 [2024-11-26 20:38:12.985108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.554 [2024-11-26 20:38:12.998246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.554 [2024-11-26 20:38:12.998273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.554 [2024-11-26 20:38:13.009191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.554 [2024-11-26 20:38:13.009219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.554 [2024-11-26 20:38:13.021073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.554 [2024-11-26 20:38:13.021102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.554 [2024-11-26 20:38:13.032804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.554 [2024-11-26 20:38:13.032831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.554 [2024-11-26 20:38:13.046693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.554 [2024-11-26 20:38:13.046720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.554 [2024-11-26 20:38:13.057693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.554 [2024-11-26 20:38:13.057720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.554 [2024-11-26 20:38:13.068654] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.554 [2024-11-26 20:38:13.068681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.554 [2024-11-26 20:38:13.080055] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.554 [2024-11-26 20:38:13.080083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.554 [2024-11-26 20:38:13.091441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.554 [2024-11-26 20:38:13.091470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.554 [2024-11-26 20:38:13.103679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.554 [2024-11-26 20:38:13.103705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.554 [2024-11-26 20:38:13.114816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.554 [2024-11-26 20:38:13.114843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.554 [2024-11-26 20:38:13.126610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.554 [2024-11-26 20:38:13.126654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.554 [2024-11-26 20:38:13.138173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.554 [2024-11-26 20:38:13.138200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.554 [2024-11-26 20:38:13.149942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.554 [2024-11-26 20:38:13.149968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.554 [2024-11-26 20:38:13.162008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.554 [2024-11-26 20:38:13.162035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.554 [2024-11-26 20:38:13.173383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.554 [2024-11-26 20:38:13.173433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.554 [2024-11-26 20:38:13.185177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.554 [2024-11-26 20:38:13.185205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.555 [2024-11-26 20:38:13.198415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.555 [2024-11-26 20:38:13.198459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.555 [2024-11-26 20:38:13.209520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.555 [2024-11-26 20:38:13.209549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.555 [2024-11-26 20:38:13.220935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.555 [2024-11-26 20:38:13.220963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.555 [2024-11-26 20:38:13.232180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.555 [2024-11-26 20:38:13.232206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.555 [2024-11-26 20:38:13.243568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.555 [2024-11-26 20:38:13.243611] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.812 [2024-11-26 20:38:13.256710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.812 [2024-11-26 20:38:13.256737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.812 [2024-11-26 20:38:13.267701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.812 [2024-11-26 20:38:13.267729] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.813 [2024-11-26 20:38:13.279706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.813 [2024-11-26 20:38:13.279733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.813 [2024-11-26 20:38:13.291352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.813 [2024-11-26 20:38:13.291381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.813 [2024-11-26 20:38:13.304440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.813 [2024-11-26 20:38:13.304468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.813 [2024-11-26 20:38:13.315134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.813 [2024-11-26 20:38:13.315161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.813 [2024-11-26 20:38:13.327815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.813 [2024-11-26 20:38:13.327843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.813 [2024-11-26 20:38:13.339131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.813 [2024-11-26 20:38:13.339157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.813 [2024-11-26 20:38:13.350713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.813 [2024-11-26 20:38:13.350740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.813 [2024-11-26 20:38:13.362668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.813 [2024-11-26 20:38:13.362710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.813 [2024-11-26 20:38:13.374244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.813 [2024-11-26 20:38:13.374270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.813 [2024-11-26 20:38:13.385566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.813 [2024-11-26 20:38:13.385609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.813 10986.00 IOPS, 85.83 MiB/s [2024-11-26T19:38:13.510Z] [2024-11-26 20:38:13.396823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.813 [2024-11-26 20:38:13.396850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.813 [2024-11-26 20:38:13.408721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.813 [2024-11-26 20:38:13.408748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.813 [2024-11-26 
20:38:13.420323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.813 [2024-11-26 20:38:13.420362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.813 [2024-11-26 20:38:13.431373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.813 [2024-11-26 20:38:13.431402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.813 [2024-11-26 20:38:13.442947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.813 [2024-11-26 20:38:13.442975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.813 [2024-11-26 20:38:13.455824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.813 [2024-11-26 20:38:13.455850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.813 [2024-11-26 20:38:13.466670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.813 [2024-11-26 20:38:13.466696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.813 [2024-11-26 20:38:13.478473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.813 [2024-11-26 20:38:13.478502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.813 [2024-11-26 20:38:13.490347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.813 [2024-11-26 20:38:13.490376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:09.813 [2024-11-26 20:38:13.501695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:09.813 [2024-11-26 20:38:13.501722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.071 [2024-11-26 20:38:13.515447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.071 [2024-11-26 20:38:13.515498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.071 [2024-11-26 20:38:13.525968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.071 [2024-11-26 20:38:13.525996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.071 [2024-11-26 20:38:13.537706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.071 [2024-11-26 20:38:13.537733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.071 [2024-11-26 20:38:13.549236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.071 [2024-11-26 20:38:13.549263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.071 [2024-11-26 20:38:13.560789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.071 [2024-11-26 20:38:13.560816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.071 [2024-11-26 20:38:13.572313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.071 [2024-11-26 20:38:13.572342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.071 [2024-11-26 20:38:13.583862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.071 [2024-11-26 20:38:13.583889] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.071 [2024-11-26 20:38:13.597157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.071 [2024-11-26 20:38:13.597184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.072 [2024-11-26 20:38:13.607338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.072 [2024-11-26 20:38:13.607382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.072 [2024-11-26 20:38:13.618790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.072 [2024-11-26 20:38:13.618819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.072 [2024-11-26 20:38:13.630377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.072 [2024-11-26 20:38:13.630406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.072 [2024-11-26 20:38:13.644515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.072 [2024-11-26 20:38:13.644544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.072 [2024-11-26 20:38:13.655420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.072 [2024-11-26 20:38:13.655448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.072 [2024-11-26 20:38:13.666538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.072 [2024-11-26 20:38:13.666567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.072 [2024-11-26 20:38:13.677775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.072 [2024-11-26 20:38:13.677802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.072 [2024-11-26 20:38:13.689120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.072 [2024-11-26 20:38:13.689147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.072 [2024-11-26 20:38:13.700053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.072 [2024-11-26 20:38:13.700080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.072 [2024-11-26 20:38:13.711216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.072 [2024-11-26 20:38:13.711243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.072 [2024-11-26 20:38:13.722490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.072 [2024-11-26 20:38:13.722519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.072 [2024-11-26 20:38:13.734075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.072 [2024-11-26 20:38:13.734109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.072 [2024-11-26 20:38:13.745759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.072 [2024-11-26 20:38:13.745787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.072 [2024-11-26 20:38:13.757441] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.072 [2024-11-26 20:38:13.757469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.330 [2024-11-26 20:38:13.769212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.330 [2024-11-26 20:38:13.769240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.330 [2024-11-26 20:38:13.780446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.330 [2024-11-26 20:38:13.780474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.330 [2024-11-26 20:38:13.793766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.330 [2024-11-26 20:38:13.793793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.330 [2024-11-26 20:38:13.805122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.330 [2024-11-26 20:38:13.805149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.330 [2024-11-26 20:38:13.816543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.330 [2024-11-26 20:38:13.816573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.330 [2024-11-26 20:38:13.828139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.330 [2024-11-26 20:38:13.828166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.330 [2024-11-26 20:38:13.839553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.330 [2024-11-26 20:38:13.839585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.330 [2024-11-26 20:38:13.850798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.330 [2024-11-26 20:38:13.850827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.330 [2024-11-26 20:38:13.862843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.330 [2024-11-26 20:38:13.862870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.330 [2024-11-26 20:38:13.874232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.330 [2024-11-26 20:38:13.874260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.330 [2024-11-26 20:38:13.889089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.330 [2024-11-26 20:38:13.889117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.330 [2024-11-26 20:38:13.900256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.330 [2024-11-26 20:38:13.900299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.330 [2024-11-26 20:38:13.911278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.330 [2024-11-26 20:38:13.911334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.330 [2024-11-26 20:38:13.922490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.330 [2024-11-26 20:38:13.922518] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.330 [2024-11-26 20:38:13.933668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.330 [2024-11-26 20:38:13.933696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.330 [2024-11-26 20:38:13.945630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.330 [2024-11-26 20:38:13.945658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.330 [2024-11-26 20:38:13.956837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.330 [2024-11-26 20:38:13.956873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.330 [2024-11-26 20:38:13.968448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.330 [2024-11-26 20:38:13.968476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.330 [2024-11-26 20:38:13.980198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.330 [2024-11-26 20:38:13.980225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.330 [2024-11-26 20:38:13.991850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.330 [2024-11-26 20:38:13.991878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.330 [2024-11-26 20:38:14.003148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.330 [2024-11-26 20:38:14.003175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.330 [2024-11-26 20:38:14.014958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.330 [2024-11-26 20:38:14.014985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.588 [2024-11-26 20:38:14.026175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.588 [2024-11-26 20:38:14.026203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.588 [2024-11-26 20:38:14.037925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.588 [2024-11-26 20:38:14.037953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.588 [2024-11-26 20:38:14.049480] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.588 [2024-11-26 20:38:14.049509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.588 [2024-11-26 20:38:14.062507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.588 [2024-11-26 20:38:14.062537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.588 [2024-11-26 20:38:14.073671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.588 [2024-11-26 20:38:14.073698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.588 [2024-11-26 20:38:14.085230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.589 [2024-11-26 20:38:14.085258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.589 [2024-11-26 20:38:14.096822] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.589 [2024-11-26 20:38:14.096849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.589 [2024-11-26 20:38:14.109039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.589 [2024-11-26 20:38:14.109065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.589 [2024-11-26 20:38:14.120209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.589 [2024-11-26 20:38:14.120235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.589 [2024-11-26 20:38:14.133698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.589 [2024-11-26 20:38:14.133738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.589 [2024-11-26 20:38:14.144528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.589 [2024-11-26 20:38:14.144558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.589 [2024-11-26 20:38:14.155929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.589 [2024-11-26 20:38:14.155956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.589 [2024-11-26 20:38:14.169221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.589 [2024-11-26 20:38:14.169248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.589 [2024-11-26 20:38:14.179882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.589 [2024-11-26 20:38:14.179909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.589 [2024-11-26 20:38:14.191711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.589 [2024-11-26 20:38:14.191738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.589 [2024-11-26 20:38:14.203514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.589 [2024-11-26 20:38:14.203558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.589 [2024-11-26 20:38:14.214669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.589 [2024-11-26 20:38:14.214696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.589 [2024-11-26 20:38:14.226025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.589 [2024-11-26 20:38:14.226052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.589 [2024-11-26 20:38:14.237591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.589 [2024-11-26 20:38:14.237619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.589 [2024-11-26 20:38:14.249393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.589 [2024-11-26 20:38:14.249422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.589 [2024-11-26 20:38:14.260644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.589 [2024-11-26 20:38:14.260671] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.589 [2024-11-26 20:38:14.273810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.589 [2024-11-26 20:38:14.273838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.847 [2024-11-26 20:38:14.284780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.847 [2024-11-26 20:38:14.284808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.847 [2024-11-26 20:38:14.296117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.847 [2024-11-26 20:38:14.296144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.847 [2024-11-26 20:38:14.307846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.847 [2024-11-26 20:38:14.307873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.847 [2024-11-26 20:38:14.319437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.847 [2024-11-26 20:38:14.319466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.847 [2024-11-26 20:38:14.331095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.847 [2024-11-26 20:38:14.331123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.847 [2024-11-26 20:38:14.342839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.847 [2024-11-26 20:38:14.342867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.847 [2024-11-26 20:38:14.354628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.847 [2024-11-26 20:38:14.354670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.847 [2024-11-26 20:38:14.365965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.847 [2024-11-26 20:38:14.365992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.847 [2024-11-26 20:38:14.377755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.847 [2024-11-26 20:38:14.377782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.847 [2024-11-26 20:38:14.389521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.847 [2024-11-26 20:38:14.389550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.847 11015.33 IOPS, 86.06 MiB/s [2024-11-26T19:38:14.544Z] [2024-11-26 20:38:14.400731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.847 [2024-11-26 20:38:14.400759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.847 [2024-11-26 20:38:14.412661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.847 [2024-11-26 20:38:14.412688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.847 [2024-11-26 20:38:14.424750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.847 [2024-11-26 20:38:14.424777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.847 [2024-11-26 
20:38:14.436495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.847 [2024-11-26 20:38:14.436539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.847 [2024-11-26 20:38:14.447993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.847 [2024-11-26 20:38:14.448020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.847 [2024-11-26 20:38:14.460324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.847 [2024-11-26 20:38:14.460366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.847 [2024-11-26 20:38:14.471786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.847 [2024-11-26 20:38:14.471813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.847 [2024-11-26 20:38:14.483690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.847 [2024-11-26 20:38:14.483716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.847 [2024-11-26 20:38:14.496805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.847 [2024-11-26 20:38:14.496832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.847 [2024-11-26 20:38:14.507898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.847 [2024-11-26 20:38:14.507925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.847 [2024-11-26 20:38:14.519438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.847 [2024-11-26 20:38:14.519466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:10.847 [2024-11-26 20:38:14.530510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:10.847 [2024-11-26 20:38:14.530552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.106 [2024-11-26 20:38:14.542082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.106 [2024-11-26 20:38:14.542111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.106 [2024-11-26 20:38:14.553679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.106 [2024-11-26 20:38:14.553706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.106 [2024-11-26 20:38:14.565248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.106 [2024-11-26 20:38:14.565280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.106 [2024-11-26 20:38:14.576412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.106 [2024-11-26 20:38:14.576440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.106 [2024-11-26 20:38:14.588456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.106 [2024-11-26 20:38:14.588499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.106 [2024-11-26 20:38:14.600802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.106 [2024-11-26 20:38:14.600830] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.106 [2024-11-26 20:38:14.612125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.106 [2024-11-26 20:38:14.612160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.106 [2024-11-26 20:38:14.623745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.106 [2024-11-26 20:38:14.623773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.106 [2024-11-26 20:38:14.635120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.106 [2024-11-26 20:38:14.635148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.106 [2024-11-26 20:38:14.647085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.106 [2024-11-26 20:38:14.647112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.106 [2024-11-26 20:38:14.658621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.106 [2024-11-26 20:38:14.658649] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.106 [2024-11-26 20:38:14.669905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.106 [2024-11-26 20:38:14.669933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.106 [2024-11-26 20:38:14.681483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.106 [2024-11-26 20:38:14.681526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.106 [2024-11-26 20:38:14.692887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.106 [2024-11-26 20:38:14.692915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.106 [2024-11-26 20:38:14.704827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.106 [2024-11-26 20:38:14.704854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.106 [2024-11-26 20:38:14.716329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.106 [2024-11-26 20:38:14.716382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.106 [2024-11-26 20:38:14.727628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.106 [2024-11-26 20:38:14.727656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.106 [2024-11-26 20:38:14.739505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.106 [2024-11-26 20:38:14.739549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.106 [2024-11-26 20:38:14.751103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.106 [2024-11-26 20:38:14.751130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.106 [2024-11-26 20:38:14.762834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.106 [2024-11-26 20:38:14.762861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.106 [2024-11-26 20:38:14.774545] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.106 [2024-11-26 20:38:14.774574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.106 [2024-11-26 20:38:14.786445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.106 [2024-11-26 20:38:14.786474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.106 [2024-11-26 20:38:14.798034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.106 [2024-11-26 20:38:14.798060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.364 [2024-11-26 20:38:14.809450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.364 [2024-11-26 20:38:14.809479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.364 [2024-11-26 20:38:14.820953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.364 [2024-11-26 20:38:14.820982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.364 [2024-11-26 20:38:14.832716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.364 [2024-11-26 20:38:14.832750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.364 [2024-11-26 20:38:14.844009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.364 [2024-11-26 20:38:14.844035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.364 [2024-11-26 20:38:14.855040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.364 [2024-11-26 20:38:14.855066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.364 [2024-11-26 20:38:14.866382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.364 [2024-11-26 20:38:14.866426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.364 [2024-11-26 20:38:14.879782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.364 [2024-11-26 20:38:14.879809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.364 [2024-11-26 20:38:14.890522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.364 [2024-11-26 20:38:14.890551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.364 [2024-11-26 20:38:14.901688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.364 [2024-11-26 20:38:14.901714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.364 [2024-11-26 20:38:14.913354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.364 [2024-11-26 20:38:14.913382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.364 [2024-11-26 20:38:14.924602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.364 [2024-11-26 20:38:14.924630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.364 [2024-11-26 20:38:14.936115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.364 [2024-11-26 20:38:14.936142] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.364 [2024-11-26 20:38:14.947548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.364 [2024-11-26 20:38:14.947577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.364 [2024-11-26 20:38:14.959060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.364 [2024-11-26 20:38:14.959087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.364 [2024-11-26 20:38:14.970765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.364 [2024-11-26 20:38:14.970791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.364 [2024-11-26 20:38:14.981970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.364 [2024-11-26 20:38:14.981997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.364 [2024-11-26 20:38:14.993495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.364 [2024-11-26 20:38:14.993524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.364 [2024-11-26 20:38:15.004857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.364 [2024-11-26 20:38:15.004884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.364 [2024-11-26 20:38:15.016462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.364 [2024-11-26 20:38:15.016491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.364 [2024-11-26 20:38:15.027805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.365 [2024-11-26 20:38:15.027834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.365 [2024-11-26 20:38:15.040111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.365 [2024-11-26 20:38:15.040138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.365 [2024-11-26 20:38:15.051912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.365 [2024-11-26 20:38:15.051948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.622 [2024-11-26 20:38:15.063910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.622 [2024-11-26 20:38:15.063938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.622 [2024-11-26 20:38:15.075694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.623 [2024-11-26 20:38:15.075720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.623 [2024-11-26 20:38:15.087139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.623 [2024-11-26 20:38:15.087165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.623 [2024-11-26 20:38:15.098349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.623 [2024-11-26 20:38:15.098379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.623 [2024-11-26 20:38:15.112790] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.623 [2024-11-26 20:38:15.112818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.623 [2024-11-26 20:38:15.124175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.623 [2024-11-26 20:38:15.124203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.623 [2024-11-26 20:38:15.135573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.623 [2024-11-26 20:38:15.135617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.623 [2024-11-26 20:38:15.146577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.623 [2024-11-26 20:38:15.146605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.623 [2024-11-26 20:38:15.157875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.623 [2024-11-26 20:38:15.157902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.623 [2024-11-26 20:38:15.169137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.623 [2024-11-26 20:38:15.169165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.623 [2024-11-26 20:38:15.180468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.623 [2024-11-26 20:38:15.180497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.623 [2024-11-26 20:38:15.191951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.623 [2024-11-26 20:38:15.191978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.623 [2024-11-26 20:38:15.203461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.623 [2024-11-26 20:38:15.203489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.623 [2024-11-26 20:38:15.215256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.623 [2024-11-26 20:38:15.215298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.623 [2024-11-26 20:38:15.227393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.623 [2024-11-26 20:38:15.227421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.623 [2024-11-26 20:38:15.238954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.623 [2024-11-26 20:38:15.238982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.623 [2024-11-26 20:38:15.250086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.623 [2024-11-26 20:38:15.250113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.623 [2024-11-26 20:38:15.261807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.623 [2024-11-26 20:38:15.261835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.623 [2024-11-26 20:38:15.273085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.623 [2024-11-26 20:38:15.273125] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.623 [2024-11-26 20:38:15.284560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.623 [2024-11-26 20:38:15.284588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.623 [2024-11-26 20:38:15.296359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.623 [2024-11-26 20:38:15.296387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.623 [2024-11-26 20:38:15.307659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.623 [2024-11-26 20:38:15.307687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.883 [2024-11-26 20:38:15.318776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.883 [2024-11-26 20:38:15.318804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.883 [2024-11-26 20:38:15.330421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.883 [2024-11-26 20:38:15.330450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.883 [2024-11-26 20:38:15.341830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.883 [2024-11-26 20:38:15.341859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.883 [2024-11-26 20:38:15.353776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.883 [2024-11-26 20:38:15.353803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.883 [2024-11-26 20:38:15.365357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.883 [2024-11-26 20:38:15.365385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.883 [2024-11-26 20:38:15.377016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.883 [2024-11-26 20:38:15.377059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.883 [2024-11-26 20:38:15.390511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.883 [2024-11-26 20:38:15.390540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.883 11012.00 IOPS, 86.03 MiB/s [2024-11-26T19:38:15.580Z] [2024-11-26 20:38:15.401440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.883 [2024-11-26 20:38:15.401469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.883 [2024-11-26 20:38:15.413115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.883 [2024-11-26 20:38:15.413142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.884 [2024-11-26 20:38:15.424541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.884 [2024-11-26 20:38:15.424571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.884 [2024-11-26 20:38:15.438139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.884 [2024-11-26 20:38:15.438167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.884 [2024-11-26 
20:38:15.448975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.884 [2024-11-26 20:38:15.449003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.884 [2024-11-26 20:38:15.460706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.884 [2024-11-26 20:38:15.460733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.884 [2024-11-26 20:38:15.472622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.884 [2024-11-26 20:38:15.472650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.884 [2024-11-26 20:38:15.484192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.884 [2024-11-26 20:38:15.484219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.884 [2024-11-26 20:38:15.495779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.884 [2024-11-26 20:38:15.495807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.884 [2024-11-26 20:38:15.507423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.884 [2024-11-26 20:38:15.507452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.884 [2024-11-26 20:38:15.519477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.884 [2024-11-26 20:38:15.519506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.884 [2024-11-26 20:38:15.530984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.884 [2024-11-26 20:38:15.531011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.884 [2024-11-26 20:38:15.542361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.884 [2024-11-26 20:38:15.542391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.884 [2024-11-26 20:38:15.553899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.884 [2024-11-26 20:38:15.553926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.884 [2024-11-26 20:38:15.565065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.884 [2024-11-26 20:38:15.565092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:11.884 [2024-11-26 20:38:15.576353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:11.884 [2024-11-26 20:38:15.576383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.180 [2024-11-26 20:38:15.587430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.180 [2024-11-26 20:38:15.587459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.180 [2024-11-26 20:38:15.598427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.180 [2024-11-26 20:38:15.598456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.180 [2024-11-26 20:38:15.611815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.180 [2024-11-26 20:38:15.611842] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.180 [2024-11-26 20:38:15.622514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.180 [2024-11-26 20:38:15.622543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.180 [2024-11-26 20:38:15.634324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.180 [2024-11-26 20:38:15.634351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.180 [2024-11-26 20:38:15.646002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.180 [2024-11-26 20:38:15.646029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.180 [2024-11-26 20:38:15.657625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.180 [2024-11-26 20:38:15.657666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.180 [2024-11-26 20:38:15.669471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.180 [2024-11-26 20:38:15.669500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.180 [2024-11-26 20:38:15.680932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.180 [2024-11-26 20:38:15.680960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.180 [2024-11-26 20:38:15.692261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.180 [2024-11-26 20:38:15.692310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.180 [2024-11-26 20:38:15.703704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.180 [2024-11-26 20:38:15.703746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.180 [2024-11-26 20:38:15.714800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.180 [2024-11-26 20:38:15.714827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.180 [2024-11-26 20:38:15.726418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.180 [2024-11-26 20:38:15.726447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.180 [2024-11-26 20:38:15.737970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.180 [2024-11-26 20:38:15.737997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.180 [2024-11-26 20:38:15.749317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.180 [2024-11-26 20:38:15.749347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.180 [2024-11-26 20:38:15.760947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.180 [2024-11-26 20:38:15.760974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.180 [2024-11-26 20:38:15.772482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.180 [2024-11-26 20:38:15.772526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.180 [2024-11-26 20:38:15.784036] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.180 [2024-11-26 20:38:15.784065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.180 [2024-11-26 20:38:15.795705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.180 [2024-11-26 20:38:15.795733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.180 [2024-11-26 20:38:15.807056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.180 [2024-11-26 20:38:15.807084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.180 [2024-11-26 20:38:15.818569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.180 [2024-11-26 20:38:15.818613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.180 [2024-11-26 20:38:15.830723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.180 [2024-11-26 20:38:15.830752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.180 [2024-11-26 20:38:15.842036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.180 [2024-11-26 20:38:15.842066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.180 [2024-11-26 20:38:15.853563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.180 [2024-11-26 20:38:15.853592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.438 [2024-11-26 20:38:15.864895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.438 [2024-11-26 20:38:15.864922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.438 [2024-11-26 20:38:15.876201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.438 [2024-11-26 20:38:15.876229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.438 [2024-11-26 20:38:15.888142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.438 [2024-11-26 20:38:15.888170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.438 [2024-11-26 20:38:15.899733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.438 [2024-11-26 20:38:15.899773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.438 [2024-11-26 20:38:15.911601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.438 [2024-11-26 20:38:15.911628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.438 [2024-11-26 20:38:15.923806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.438 [2024-11-26 20:38:15.923833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.438 [2024-11-26 20:38:15.934753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.438 [2024-11-26 20:38:15.934793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.438 [2024-11-26 20:38:15.948326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.438 [2024-11-26 20:38:15.948370] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.438 [2024-11-26 20:38:15.959228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.438 [2024-11-26 20:38:15.959254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.438 [2024-11-26 20:38:15.970722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.438 [2024-11-26 20:38:15.970749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.438 [2024-11-26 20:38:15.982226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.438 [2024-11-26 20:38:15.982254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.438 [2024-11-26 20:38:15.993478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.438 [2024-11-26 20:38:15.993522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.438 [2024-11-26 20:38:16.005195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.438 [2024-11-26 20:38:16.005222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.438 [2024-11-26 20:38:16.016835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.438 [2024-11-26 20:38:16.016877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.438 [2024-11-26 20:38:16.029978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.438 [2024-11-26 20:38:16.030019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.438 [2024-11-26 20:38:16.040969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.438 [2024-11-26 20:38:16.040997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.438 [2024-11-26 20:38:16.052122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.438 [2024-11-26 20:38:16.052150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.438 [2024-11-26 20:38:16.065253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.438 [2024-11-26 20:38:16.065283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.438 [2024-11-26 20:38:16.076596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.438 [2024-11-26 20:38:16.076624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.438 [2024-11-26 20:38:16.088102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.438 [2024-11-26 20:38:16.088129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.438 [2024-11-26 20:38:16.099243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.438 [2024-11-26 20:38:16.099270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.438 [2024-11-26 20:38:16.110727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.438 [2024-11-26 20:38:16.110754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.438 [2024-11-26 20:38:16.122540] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.438 [2024-11-26 20:38:16.122569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.696 [2024-11-26 20:38:16.133964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.696 [2024-11-26 20:38:16.133992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.696 [2024-11-26 20:38:16.145241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.696 [2024-11-26 20:38:16.145276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.696 [2024-11-26 20:38:16.158670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.696 [2024-11-26 20:38:16.158696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.696 [2024-11-26 20:38:16.169418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.696 [2024-11-26 20:38:16.169446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.696 [2024-11-26 20:38:16.181949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.696 [2024-11-26 20:38:16.181976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.696 [2024-11-26 20:38:16.193668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.696 [2024-11-26 20:38:16.193695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.696 [2024-11-26 20:38:16.205352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.696 [2024-11-26 20:38:16.205381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.696 [2024-11-26 20:38:16.219678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.696 [2024-11-26 20:38:16.219705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.696 [2024-11-26 20:38:16.230521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.696 [2024-11-26 20:38:16.230549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.696 [2024-11-26 20:38:16.242474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.696 [2024-11-26 20:38:16.242503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.696 [2024-11-26 20:38:16.253894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.696 [2024-11-26 20:38:16.253921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.696 [2024-11-26 20:38:16.265282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.696 [2024-11-26 20:38:16.265336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.696 [2024-11-26 20:38:16.276840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.696 [2024-11-26 20:38:16.276867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.696 [2024-11-26 20:38:16.288406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.696 [2024-11-26 20:38:16.288435] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.696 [2024-11-26 20:38:16.299374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.696 [2024-11-26 20:38:16.299402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.696 [2024-11-26 20:38:16.310839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.696 [2024-11-26 20:38:16.310865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.696 [2024-11-26 20:38:16.322473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.696 [2024-11-26 20:38:16.322502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.696 [2024-11-26 20:38:16.334024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.696 [2024-11-26 20:38:16.334052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.696 [2024-11-26 20:38:16.347649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.696 [2024-11-26 20:38:16.347691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.696 [2024-11-26 20:38:16.358382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.696 [2024-11-26 20:38:16.358410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.696 [2024-11-26 20:38:16.369633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.696 [2024-11-26 20:38:16.369669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.696 [2024-11-26 20:38:16.382728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.696 [2024-11-26 20:38:16.382755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.955 [2024-11-26 20:38:16.393315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.955 [2024-11-26 20:38:16.393343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.955 11020.60 IOPS, 86.10 MiB/s [2024-11-26T19:38:16.652Z] [2024-11-26 20:38:16.404642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.955 [2024-11-26 20:38:16.404686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.955 [2024-11-26 20:38:16.414797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.955 [2024-11-26 20:38:16.414824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.955 00:08:12.955 Latency(us) 00:08:12.955 [2024-11-26T19:38:16.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:12.955 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:12.955 Nvme1n1 : 5.01 11020.85 86.10 0.00 0.00 11598.32 4975.88 20680.25 00:08:12.955 [2024-11-26T19:38:16.652Z] =================================================================================================================== 00:08:12.955 [2024-11-26T19:38:16.652Z] Total : 11020.85 86.10 0.00 0.00 11598.32 4975.88 20680.25 00:08:12.955 [2024-11-26 20:38:16.420695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.955 [2024-11-26 
20:38:16.420720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.955 [2024-11-26 20:38:16.428711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.955 [2024-11-26 20:38:16.428736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.955 [2024-11-26 20:38:16.436741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.955 [2024-11-26 20:38:16.436765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.955 [2024-11-26 20:38:16.444817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.955 [2024-11-26 20:38:16.444862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.955 [2024-11-26 20:38:16.452834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.955 [2024-11-26 20:38:16.452881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.955 [2024-11-26 20:38:16.460850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.955 [2024-11-26 20:38:16.460895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.955 [2024-11-26 20:38:16.468883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.955 [2024-11-26 20:38:16.468929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.955 [2024-11-26 20:38:16.476891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.955 [2024-11-26 20:38:16.476935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.955 [2024-11-26 20:38:16.484914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.955 [2024-11-26 20:38:16.484957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.955 [2024-11-26 20:38:16.492936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.955 [2024-11-26 20:38:16.492981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.955 [2024-11-26 20:38:16.500955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.955 [2024-11-26 20:38:16.500999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.955 [2024-11-26 20:38:16.508978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.955 [2024-11-26 20:38:16.509036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.955 [2024-11-26 20:38:16.516999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.955 [2024-11-26 20:38:16.517044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.955 [2024-11-26 20:38:16.533080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.955 [2024-11-26 20:38:16.533134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.955 [2024-11-26 20:38:16.541067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.955 [2024-11-26 20:38:16.541113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.955 [2024-11-26 20:38:16.549087] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.955 [2024-11-26 20:38:16.549131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.955 [2024-11-26 20:38:16.557111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.955 [2024-11-26 20:38:16.557154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.955 [2024-11-26 20:38:16.565134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.955 [2024-11-26 20:38:16.565178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.955 [2024-11-26 20:38:16.573085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.955 [2024-11-26 20:38:16.573107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.955 [2024-11-26 20:38:16.581103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.955 [2024-11-26 20:38:16.581124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.955 [2024-11-26 20:38:16.589124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.955 [2024-11-26 20:38:16.589144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.955 [2024-11-26 20:38:16.597144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.956 [2024-11-26 20:38:16.597164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.956 [2024-11-26 20:38:16.605223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.956 [2024-11-26 20:38:16.605261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.956 [2024-11-26 20:38:16.613257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.956 [2024-11-26 20:38:16.613298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.956 [2024-11-26 20:38:16.621264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.956 [2024-11-26 20:38:16.621327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.956 [2024-11-26 20:38:16.629231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.956 [2024-11-26 20:38:16.629251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.956 [2024-11-26 20:38:16.637251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.956 [2024-11-26 20:38:16.637271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:12.956 [2024-11-26 20:38:16.645268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:12.956 [2024-11-26 20:38:16.645310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:13.214 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1579822) - No such process 00:08:13.214 20:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1579822 00:08:13.214 20:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.214 20:38:16 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.214 20:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:13.214 20:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.214 20:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:13.214 20:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.214 20:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:13.214 delay0 00:08:13.214 20:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.214 20:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:13.214 20:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.214 20:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:13.214 20:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.214 20:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:13.215 [2024-11-26 20:38:16.772108] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:19.764 [2024-11-26 20:38:22.919965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc03b0 is same with the state(6) to be set 00:08:19.764 Initializing NVMe Controllers 00:08:19.764 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:19.764 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:19.764 Initialization complete. Launching workers. 
00:08:19.764 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 931 00:08:19.764 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1218, failed to submit 33 00:08:19.764 success 1039, unsuccessful 179, failed 0 00:08:19.764 20:38:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:19.764 20:38:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:19.764 20:38:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:19.764 20:38:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:19.764 20:38:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:19.764 20:38:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:19.764 20:38:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:19.764 20:38:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:19.764 rmmod nvme_tcp 00:08:19.764 rmmod nvme_fabrics 00:08:19.764 rmmod nvme_keyring 00:08:19.764 20:38:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:19.764 20:38:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:19.764 20:38:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:19.764 20:38:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1578591 ']' 00:08:19.764 20:38:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1578591 00:08:19.764 20:38:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1578591 ']' 00:08:19.764 20:38:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1578591 00:08:19.764 20:38:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:08:19.764 20:38:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:19.764 20:38:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1578591 00:08:19.764 20:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:19.764 20:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:19.764 20:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1578591' 00:08:19.764 killing process with pid 1578591 00:08:19.764 20:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1578591 00:08:19.764 20:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1578591 00:08:19.764 20:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:19.764 20:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:19.764 20:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:19.764 20:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:19.764 20:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:08:19.764 20:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:19.764 20:38:23 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:19.764 20:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:19.764 20:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:19.764 20:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.764 20:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:19.764 20:38:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.665 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:21.665 00:08:21.665 real 0m27.871s 00:08:21.665 user 0m41.175s 00:08:21.665 sys 0m8.031s 00:08:21.665 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.665 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:21.665 ************************************ 00:08:21.665 END TEST nvmf_zcopy 00:08:21.665 ************************************ 00:08:21.665 20:38:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:21.665 20:38:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:21.665 20:38:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.665 20:38:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:21.923 ************************************ 00:08:21.924 START TEST nvmf_nmic 00:08:21.924 ************************************ 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:21.924 * Looking for test storage... 
00:08:21.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:21.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.924 --rc genhtml_branch_coverage=1 00:08:21.924 --rc genhtml_function_coverage=1 00:08:21.924 --rc genhtml_legend=1 00:08:21.924 --rc geninfo_all_blocks=1 00:08:21.924 --rc geninfo_unexecuted_blocks=1 00:08:21.924 00:08:21.924 ' 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:21.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.924 --rc genhtml_branch_coverage=1 00:08:21.924 --rc genhtml_function_coverage=1 00:08:21.924 --rc genhtml_legend=1 00:08:21.924 --rc geninfo_all_blocks=1 00:08:21.924 --rc geninfo_unexecuted_blocks=1 00:08:21.924 00:08:21.924 ' 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:21.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.924 --rc genhtml_branch_coverage=1 00:08:21.924 --rc genhtml_function_coverage=1 00:08:21.924 --rc genhtml_legend=1 00:08:21.924 --rc geninfo_all_blocks=1 00:08:21.924 --rc geninfo_unexecuted_blocks=1 00:08:21.924 00:08:21.924 ' 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:21.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.924 --rc genhtml_branch_coverage=1 00:08:21.924 --rc genhtml_function_coverage=1 00:08:21.924 --rc genhtml_legend=1 00:08:21.924 --rc geninfo_all_blocks=1 00:08:21.924 --rc geninfo_unexecuted_blocks=1 00:08:21.924 00:08:21.924 ' 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
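For reference, the nvmftestfini teardown traced at the start of this section (unloading the NVMe/TCP initiator modules, killing the target process, restoring iptables, and tearing down the test namespace) condenses to roughly the following steps. This is a hedged sketch, not the literal helper; the PID, interface, and namespace names are the ones from this run.

    modprobe -v -r nvme-tcp                               # unload initiator-side kernel modules
    modprobe -v -r nvme-fabrics                           # (the trace shows nvme_tcp/nvme_fabrics/nvme_keyring going away)
    kill 1578591 && wait 1578591                          # stop the nvmf_tgt app started for the zcopy test
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop the SPDK-tagged firewall rules
    ip netns del cvl_0_0_ns_spdk                          # remove the target-side network namespace
    ip -4 addr flush cvl_0_1                              # clear the initiator-side test address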
00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:21.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:21.924 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:21.925 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:21.925 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:21.925 
20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:21.925 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:21.925 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:21.925 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:21.925 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:21.925 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.925 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:21.925 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.925 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:21.925 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:21.925 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:21.925 20:38:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:24.456 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:24.456 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:24.456 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:24.456 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:24.456 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:24.456 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:24.456 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:24.456 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:24.456 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:24.456 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:24.456 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:24.456 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:24.456 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:24.456 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:24.456 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:24.456 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:24.456 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:24.456 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:24.456 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:24.456 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:24.456 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:24.456 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:24.456 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:24.456 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:24.456 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:24.457 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:24.457 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:24.457 20:38:27 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:24.457 Found net devices under 0000:09:00.0: cvl_0_0 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:24.457 Found net devices under 0000:09:00.1: cvl_0_1 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:24.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:24.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:08:24.457 00:08:24.457 --- 10.0.0.2 ping statistics --- 00:08:24.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.457 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:08:24.457 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:24.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:24.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:08:24.457 00:08:24.457 --- 10.0.0.1 ping statistics --- 00:08:24.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.457 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:08:24.458 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:24.458 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:08:24.458 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:24.458 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:24.458 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:24.458 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:24.458 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:24.458 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:24.458 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:24.458 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:24.458 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:24.458 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:24.458 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:24.458 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1583221 00:08:24.458 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:24.458 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1583221 00:08:24.458 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1583221 ']' 00:08:24.458 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.458 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:24.458 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.458 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:24.458 20:38:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:24.458 [2024-11-26 20:38:27.918037] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
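The nvmf_tgt now starting inside cvl_0_0_ns_spdk is configured entirely over JSON-RPC once it is listening on /var/tmp/spdk.sock; the rpc_cmd calls traced below, plus the two host connections for the multipath case, reduce to roughly the following sequence. A condensed sketch under the assumptions of this run: rpc_cmd wraps scripts/rpc.py through the target namespace, and $NVME_HOSTNQN / $NVME_HOSTID are the values generated when common.sh was sourced above.

    rpc="ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    $rpc nvmf_create_transport -t tcp -o -u 8192                     # TCP transport with the options from the trace
    $rpc bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB RAM-backed bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # test case1: the same bdev cannot be exported by a second subsystem
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0    # expected to fail: Malloc0 is already claimed
    # test case2: second listener on cnode1, then connect the host over both paths
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"

The expected failure in case1 is the exclusive_write claim reported further down: once cnode1 owns Malloc0, nvmf_subsystem_add_ns on cnode2 returns the "Invalid parameters" JSON-RPC error.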
00:08:24.458 [2024-11-26 20:38:27.918126] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:24.458 [2024-11-26 20:38:27.990005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:24.458 [2024-11-26 20:38:28.051761] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:24.458 [2024-11-26 20:38:28.051812] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:24.458 [2024-11-26 20:38:28.051845] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:24.458 [2024-11-26 20:38:28.051857] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:24.458 [2024-11-26 20:38:28.051866] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:24.458 [2024-11-26 20:38:28.053611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.458 [2024-11-26 20:38:28.053713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:24.458 [2024-11-26 20:38:28.053835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.458 [2024-11-26 20:38:28.053838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:24.716 [2024-11-26 20:38:28.217739] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:24.716 Malloc0 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:24.716 [2024-11-26 20:38:28.283237] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:24.716 test case1: single bdev can't be used in multiple subsystems 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.716 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:24.716 [2024-11-26 20:38:28.307082] bdev.c:8326:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:24.716 [2024-11-26 20:38:28.307111] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:24.716 [2024-11-26 20:38:28.307141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.716 request: 00:08:24.716 { 00:08:24.717 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:24.717 "namespace": { 00:08:24.717 "bdev_name": "Malloc0", 00:08:24.717 "no_auto_visible": false 
00:08:24.717 }, 00:08:24.717 "method": "nvmf_subsystem_add_ns", 00:08:24.717 "req_id": 1 00:08:24.717 } 00:08:24.717 Got JSON-RPC error response 00:08:24.717 response: 00:08:24.717 { 00:08:24.717 "code": -32602, 00:08:24.717 "message": "Invalid parameters" 00:08:24.717 } 00:08:24.717 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:24.717 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:24.717 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:24.717 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:24.717 Adding namespace failed - expected result. 00:08:24.717 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:24.717 test case2: host connect to nvmf target in multiple paths 00:08:24.717 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:24.717 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.717 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:24.717 [2024-11-26 20:38:28.315194] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:24.717 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.717 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:25.282 20:38:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:25.846 20:38:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:25.846 20:38:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:25.846 20:38:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:25.846 20:38:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:25.846 20:38:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:08:28.371 20:38:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:28.371 20:38:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:28.371 20:38:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:28.371 20:38:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:28.371 20:38:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:28.371 20:38:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:08:28.371 20:38:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:28.371 [global] 00:08:28.371 thread=1 00:08:28.371 invalidate=1 00:08:28.371 rw=write 00:08:28.371 time_based=1 00:08:28.371 runtime=1 00:08:28.371 ioengine=libaio 00:08:28.371 direct=1 00:08:28.371 bs=4096 00:08:28.371 iodepth=1 00:08:28.371 norandommap=0 00:08:28.371 numjobs=1 00:08:28.371 00:08:28.371 verify_dump=1 00:08:28.371 verify_backlog=512 00:08:28.371 verify_state_save=0 00:08:28.371 do_verify=1 00:08:28.371 verify=crc32c-intel 00:08:28.371 [job0] 00:08:28.371 filename=/dev/nvme0n1 00:08:28.371 Could not set queue depth (nvme0n1) 00:08:28.371 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:28.371 fio-3.35 00:08:28.371 Starting 1 thread 00:08:29.301 00:08:29.301 job0: (groupid=0, jobs=1): err= 0: pid=1583740: Tue Nov 26 20:38:32 2024 00:08:29.301 read: IOPS=2284, BW=9139KiB/s (9358kB/s)(9148KiB/1001msec) 00:08:29.301 slat (nsec): min=4779, max=63906, avg=13994.26, stdev=7006.33 00:08:29.301 clat (usec): min=173, max=587, avg=231.03, stdev=41.25 00:08:29.301 lat (usec): min=185, max=622, avg=245.03, stdev=43.43 00:08:29.301 clat percentiles (usec): 00:08:29.301 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 194], 20.00th=[ 200], 00:08:29.301 | 30.00th=[ 206], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 235], 00:08:29.301 | 70.00th=[ 243], 80.00th=[ 255], 90.00th=[ 273], 95.00th=[ 289], 00:08:29.301 | 99.00th=[ 441], 99.50th=[ 474], 99.90th=[ 506], 99.95th=[ 578], 00:08:29.301 | 99.99th=[ 586] 00:08:29.301 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:08:29.301 slat (nsec): min=5666, max=42992, avg=14806.94, stdev=4326.51 00:08:29.301 clat (usec): min=120, max=290, avg=149.38, stdev=18.39 00:08:29.301 lat (usec): min=127, max=307, avg=164.18, stdev=19.40 00:08:29.301 clat percentiles (usec): 00:08:29.301 | 1.00th=[ 124], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 137], 00:08:29.301 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 149], 00:08:29.301 | 70.00th=[ 153], 80.00th=[ 161], 90.00th=[ 180], 95.00th=[ 188], 00:08:29.301 | 99.00th=[ 202], 99.50th=[ 212], 99.90th=[ 253], 99.95th=[ 277], 00:08:29.301 | 99.99th=[ 289] 00:08:29.301 bw ( KiB/s): min=11848, max=11848, per=100.00%, avg=11848.00, stdev= 0.00, samples=1 00:08:29.301 iops : min= 2962, max= 2962, avg=2962.00, stdev= 0.00, samples=1 00:08:29.301 lat (usec) : 250=89.17%, 500=10.77%, 750=0.06% 00:08:29.301 cpu : usr=3.90%, sys=7.10%, ctx=4850, majf=0, minf=1 00:08:29.301 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:29.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:29.301 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:29.301 issued rwts: total=2287,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:29.301 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:29.301 00:08:29.301 Run status group 0 (all jobs): 00:08:29.301 READ: bw=9139KiB/s (9358kB/s), 9139KiB/s-9139KiB/s (9358kB/s-9358kB/s), io=9148KiB (9368kB), run=1001-1001msec 00:08:29.301 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:08:29.301 00:08:29.301 Disk stats (read/write): 00:08:29.301 nvme0n1: ios=2075/2288, merge=0/0, ticks=1445/337, in_queue=1782, util=98.60% 00:08:29.301 20:38:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect 
-n nqn.2016-06.io.spdk:cnode1 00:08:29.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:29.558 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:29.558 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:08:29.558 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:29.558 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:29.558 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:29.558 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:29.558 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:08:29.558 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:29.558 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:29.558 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:29.558 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:29.558 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:29.558 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:29.558 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:29.558 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:29.558 rmmod nvme_tcp 00:08:29.558 rmmod nvme_fabrics 00:08:29.558 rmmod nvme_keyring 00:08:29.559 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:29.559 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:29.559 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:29.559 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1583221 ']' 00:08:29.559 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1583221 00:08:29.559 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1583221 ']' 00:08:29.559 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1583221 00:08:29.559 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:08:29.559 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:29.559 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1583221 00:08:29.559 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:29.559 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:29.559 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1583221' 00:08:29.559 killing process with pid 1583221 00:08:29.559 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1583221 00:08:29.559 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 
1583221 00:08:29.817 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:29.817 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:29.817 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:29.817 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:29.817 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:08:29.817 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:29.817 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:08:29.817 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:29.817 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:29.817 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.817 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.817 20:38:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:32.372 00:08:32.372 real 0m10.127s 00:08:32.372 user 0m22.572s 00:08:32.372 sys 0m2.583s 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:32.372 ************************************ 00:08:32.372 END TEST nvmf_nmic 00:08:32.372 ************************************ 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:32.372 ************************************ 00:08:32.372 START TEST nvmf_fio_target 00:08:32.372 ************************************ 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:32.372 * Looking for test storage... 
00:08:32.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:32.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.372 --rc genhtml_branch_coverage=1 00:08:32.372 --rc genhtml_function_coverage=1 00:08:32.372 --rc genhtml_legend=1 00:08:32.372 --rc geninfo_all_blocks=1 00:08:32.372 --rc geninfo_unexecuted_blocks=1 00:08:32.372 00:08:32.372 ' 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:32.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.372 --rc genhtml_branch_coverage=1 00:08:32.372 --rc genhtml_function_coverage=1 00:08:32.372 --rc genhtml_legend=1 00:08:32.372 --rc geninfo_all_blocks=1 00:08:32.372 --rc geninfo_unexecuted_blocks=1 00:08:32.372 00:08:32.372 ' 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:32.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.372 --rc genhtml_branch_coverage=1 00:08:32.372 --rc genhtml_function_coverage=1 00:08:32.372 --rc genhtml_legend=1 00:08:32.372 --rc geninfo_all_blocks=1 00:08:32.372 --rc geninfo_unexecuted_blocks=1 00:08:32.372 00:08:32.372 ' 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:32.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.372 --rc genhtml_branch_coverage=1 00:08:32.372 --rc genhtml_function_coverage=1 00:08:32.372 --rc genhtml_legend=1 00:08:32.372 --rc geninfo_all_blocks=1 00:08:32.372 --rc geninfo_unexecuted_blocks=1 00:08:32.372 00:08:32.372 ' 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.372 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.373 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.373 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:32.373 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.373 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:32.373 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:32.373 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:32.373 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:32.373 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:32.373 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:32.373 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:32.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:32.373 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:32.373 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:32.373 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:32.373 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:32.373 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:32.373 20:38:35 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:32.373 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:32.373 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:32.373 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:32.373 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:32.373 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:32.373 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:32.373 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.373 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:32.373 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.373 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:32.373 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:32.373 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:08:32.373 20:38:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:34.302 20:38:37 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:34.302 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:34.302 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.302 20:38:37 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:34.302 Found net devices under 0000:09:00.0: cvl_0_0 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:34.302 Found net devices under 0000:09:00.1: cvl_0_1 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:34.302 20:38:37 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:34.302 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:34.303 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:34.303 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:34.303 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:34.303 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:34.303 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:34.303 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:34.303 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:34.303 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:34.303 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:34.303 20:38:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:34.560 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:34.560 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:34.560 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:34.560 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:34.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:34.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:08:34.560 00:08:34.560 --- 10.0.0.2 ping statistics --- 00:08:34.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.560 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:08:34.560 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:34.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:34.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:08:34.560 00:08:34.560 --- 10.0.0.1 ping statistics --- 00:08:34.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.560 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:08:34.560 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:34.560 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:08:34.560 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:34.560 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:34.560 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:34.560 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:34.560 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:34.560 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:34.560 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:34.560 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:34.560 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:34.560 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:34.560 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:34.560 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1585948 00:08:34.560 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:34.560 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1585948 00:08:34.560 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1585948 ']' 00:08:34.560 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.560 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:34.560 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.560 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:34.560 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:34.560 [2024-11-26 20:38:38.095078] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:08:34.560 [2024-11-26 20:38:38.095170] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.561 [2024-11-26 20:38:38.171593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:34.561 [2024-11-26 20:38:38.232470] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:34.561 [2024-11-26 20:38:38.232524] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:34.561 [2024-11-26 20:38:38.232538] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:34.561 [2024-11-26 20:38:38.232551] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:34.561 [2024-11-26 20:38:38.232561] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:34.561 [2024-11-26 20:38:38.234340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.561 [2024-11-26 20:38:38.234368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:34.561 [2024-11-26 20:38:38.234415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:34.561 [2024-11-26 20:38:38.234419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.819 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.819 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:08:34.819 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:34.819 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:34.819 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:34.819 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:34.819 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:35.077 [2024-11-26 20:38:38.685807] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:35.077 20:38:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:35.335 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:35.335 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:35.610 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:35.610 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:36.177 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:36.177 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:36.434 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:36.434 20:38:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:36.691 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:36.949 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:36.949 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:37.206 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:37.206 20:38:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:37.463 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:37.463 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:08:37.721 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:37.979 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:37.979 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:38.236 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:38.236 20:38:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:38.493 20:38:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:38.750 [2024-11-26 20:38:42.374580] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:38.750 20:38:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:08:39.037 20:38:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:08:39.323 20:38:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:40.254 20:38:43 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:08:40.254 20:38:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:08:40.254 20:38:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:40.254 20:38:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:08:40.254 20:38:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:08:40.254 20:38:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:08:42.152 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:42.152 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:42.152 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:42.152 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:08:42.153 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:42.153 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:08:42.153 20:38:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:42.153 [global] 00:08:42.153 thread=1 00:08:42.153 invalidate=1 00:08:42.153 rw=write 00:08:42.153 time_based=1 00:08:42.153 runtime=1 00:08:42.153 ioengine=libaio 00:08:42.153 direct=1 00:08:42.153 bs=4096 00:08:42.153 iodepth=1 00:08:42.153 norandommap=0 00:08:42.153 numjobs=1 00:08:42.153 00:08:42.153 verify_dump=1 00:08:42.153 verify_backlog=512 00:08:42.153 verify_state_save=0 00:08:42.153 do_verify=1 00:08:42.153 verify=crc32c-intel 00:08:42.153 [job0] 00:08:42.153 filename=/dev/nvme0n1 00:08:42.153 [job1] 00:08:42.153 filename=/dev/nvme0n2 00:08:42.153 [job2] 00:08:42.153 filename=/dev/nvme0n3 00:08:42.153 [job3] 00:08:42.153 filename=/dev/nvme0n4 00:08:42.153 Could not set queue depth (nvme0n1) 00:08:42.153 Could not set queue depth (nvme0n2) 00:08:42.153 Could not set queue depth (nvme0n3) 00:08:42.153 Could not set queue depth (nvme0n4) 00:08:42.411 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:42.411 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:42.411 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:42.411 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:42.411 fio-3.35 00:08:42.411 Starting 4 threads 00:08:43.783 00:08:43.783 job0: (groupid=0, jobs=1): err= 0: pid=1587029: Tue Nov 26 20:38:47 2024 00:08:43.783 read: IOPS=21, BW=86.1KiB/s (88.2kB/s)(88.0KiB/1022msec) 00:08:43.783 slat (nsec): min=14040, max=34965, avg=23647.50, stdev=9339.20 00:08:43.783 clat (usec): min=40792, max=41024, avg=40963.21, stdev=50.20 00:08:43.783 lat (usec): min=40807, max=41039, avg=40986.86, stdev=49.83 00:08:43.783 clat percentiles (usec): 00:08:43.783 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 
20.00th=[41157], 00:08:43.783 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:43.783 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:08:43.783 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:43.783 | 99.99th=[41157] 00:08:43.783 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:08:43.783 slat (nsec): min=7023, max=45143, avg=14761.62, stdev=6071.88 00:08:43.783 clat (usec): min=140, max=1450, avg=215.13, stdev=70.07 00:08:43.783 lat (usec): min=156, max=1463, avg=229.90, stdev=70.49 00:08:43.783 clat percentiles (usec): 00:08:43.783 | 1.00th=[ 157], 5.00th=[ 172], 10.00th=[ 182], 20.00th=[ 190], 00:08:43.783 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 217], 00:08:43.783 | 70.00th=[ 221], 80.00th=[ 227], 90.00th=[ 237], 95.00th=[ 255], 00:08:43.783 | 99.00th=[ 334], 99.50th=[ 611], 99.90th=[ 1450], 99.95th=[ 1450], 00:08:43.783 | 99.99th=[ 1450] 00:08:43.783 bw ( KiB/s): min= 4096, max= 4096, per=21.53%, avg=4096.00, stdev= 0.00, samples=1 00:08:43.783 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:43.783 lat (usec) : 250=90.45%, 500=4.87%, 750=0.19%, 1000=0.19% 00:08:43.783 lat (msec) : 2=0.19%, 50=4.12% 00:08:43.783 cpu : usr=0.49%, sys=0.59%, ctx=535, majf=0, minf=1 00:08:43.783 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:43.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.783 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:43.783 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:43.783 job1: (groupid=0, jobs=1): err= 0: pid=1587030: Tue Nov 26 20:38:47 2024 00:08:43.783 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:08:43.783 slat (nsec): min=4762, max=58146, avg=20438.91, stdev=10554.34 00:08:43.783 clat (usec): min=204, max=3239, avg=377.30, stdev=116.33 00:08:43.783 lat (usec): min=221, max=3272, avg=397.74, stdev=120.39 00:08:43.783 clat percentiles (usec): 00:08:43.783 | 1.00th=[ 229], 5.00th=[ 249], 10.00th=[ 277], 20.00th=[ 293], 00:08:43.783 | 30.00th=[ 318], 40.00th=[ 343], 50.00th=[ 367], 60.00th=[ 388], 00:08:43.783 | 70.00th=[ 404], 80.00th=[ 441], 90.00th=[ 519], 95.00th=[ 562], 00:08:43.783 | 99.00th=[ 594], 99.50th=[ 603], 99.90th=[ 611], 99.95th=[ 3228], 00:08:43.783 | 99.99th=[ 3228] 00:08:43.783 write: IOPS=1786, BW=7145KiB/s (7316kB/s)(7152KiB/1001msec); 0 zone resets 00:08:43.783 slat (nsec): min=5950, max=72119, avg=13293.97, stdev=6396.29 00:08:43.783 clat (usec): min=128, max=3043, avg=195.35, stdev=80.34 00:08:43.783 lat (usec): min=135, max=3073, avg=208.64, stdev=81.16 00:08:43.783 clat percentiles (usec): 00:08:43.783 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 155], 00:08:43.783 | 30.00th=[ 169], 40.00th=[ 184], 50.00th=[ 192], 60.00th=[ 200], 00:08:43.783 | 70.00th=[ 210], 80.00th=[ 223], 90.00th=[ 243], 95.00th=[ 262], 00:08:43.783 | 99.00th=[ 330], 99.50th=[ 343], 99.90th=[ 979], 99.95th=[ 3032], 00:08:43.783 | 99.99th=[ 3032] 00:08:43.783 bw ( KiB/s): min= 8192, max= 8192, per=43.07%, avg=8192.00, stdev= 0.00, samples=1 00:08:43.783 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:43.783 lat (usec) : 250=52.68%, 500=41.43%, 750=5.81%, 1000=0.03% 00:08:43.783 lat (msec) : 4=0.06% 00:08:43.783 cpu : usr=2.70%, sys=6.10%, ctx=3324, majf=0, minf=1 00:08:43.783 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:43.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.783 issued rwts: total=1536,1788,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:43.783 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:43.783 job2: (groupid=0, jobs=1): err= 0: pid=1587031: Tue Nov 26 20:38:47 2024 00:08:43.783 read: IOPS=1668, BW=6673KiB/s (6833kB/s)(6680KiB/1001msec) 00:08:43.783 slat (nsec): min=6103, max=53260, avg=15573.70, stdev=6270.69 00:08:43.783 clat (usec): min=200, max=399, avg=309.49, stdev=39.31 00:08:43.783 lat (usec): min=206, max=419, avg=325.07, stdev=39.10 00:08:43.783 clat percentiles (usec): 00:08:43.783 | 1.00th=[ 233], 5.00th=[ 249], 10.00th=[ 260], 20.00th=[ 273], 00:08:43.783 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 310], 60.00th=[ 322], 00:08:43.783 | 70.00th=[ 338], 80.00th=[ 351], 90.00th=[ 363], 95.00th=[ 371], 00:08:43.783 | 99.00th=[ 388], 99.50th=[ 392], 99.90th=[ 400], 99.95th=[ 400], 00:08:43.783 | 99.99th=[ 400] 00:08:43.783 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:08:43.783 slat (nsec): min=7698, max=60758, avg=17812.48, stdev=7903.82 00:08:43.783 clat (usec): min=135, max=1237, avg=197.49, stdev=38.91 00:08:43.783 lat (usec): min=144, max=1255, avg=215.30, stdev=40.59 00:08:43.783 clat percentiles (usec): 00:08:43.783 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 174], 00:08:43.783 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 200], 00:08:43.783 | 70.00th=[ 208], 80.00th=[ 221], 90.00th=[ 239], 95.00th=[ 258], 00:08:43.783 | 99.00th=[ 281], 99.50th=[ 293], 99.90th=[ 371], 99.95th=[ 396], 00:08:43.783 | 99.99th=[ 1237] 00:08:43.783 bw ( KiB/s): min= 8192, max= 8192, per=43.07%, avg=8192.00, stdev= 0.00, samples=1 00:08:43.783 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:43.783 lat (usec) : 250=53.98%, 500=45.99% 00:08:43.783 lat (msec) : 2=0.03% 00:08:43.783 cpu : usr=4.60%, sys=7.70%, ctx=3719, majf=0, minf=1 00:08:43.783 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:43.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.783 issued rwts: total=1670,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:43.783 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:43.783 job3: (groupid=0, jobs=1): err= 0: pid=1587032: Tue Nov 26 20:38:47 2024 00:08:43.783 read: IOPS=21, BW=86.7KiB/s (88.8kB/s)(88.0KiB/1015msec) 00:08:43.783 slat (nsec): min=13953, max=39149, avg=24970.50, stdev=10281.24 00:08:43.783 clat (usec): min=40591, max=41664, avg=40999.47, stdev=178.15 00:08:43.783 lat (usec): min=40614, max=41701, avg=41024.44, stdev=179.76 00:08:43.783 clat percentiles (usec): 00:08:43.783 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:08:43.783 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:43.783 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:08:43.783 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:08:43.783 | 99.99th=[41681] 00:08:43.783 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:08:43.783 slat (nsec): min=9561, max=67644, avg=20039.68, stdev=8837.60 00:08:43.783 clat (usec): min=151, max=387, avg=193.31, stdev=20.69 
00:08:43.783 lat (usec): min=163, max=400, avg=213.35, stdev=24.29 00:08:43.783 clat percentiles (usec): 00:08:43.783 | 1.00th=[ 157], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 176], 00:08:43.783 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 194], 60.00th=[ 198], 00:08:43.783 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 217], 95.00th=[ 225], 00:08:43.783 | 99.00th=[ 251], 99.50th=[ 255], 99.90th=[ 388], 99.95th=[ 388], 00:08:43.783 | 99.99th=[ 388] 00:08:43.783 bw ( KiB/s): min= 4096, max= 4096, per=21.53%, avg=4096.00, stdev= 0.00, samples=1 00:08:43.783 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:43.783 lat (usec) : 250=94.76%, 500=1.12% 00:08:43.783 lat (msec) : 50=4.12% 00:08:43.783 cpu : usr=0.99%, sys=0.89%, ctx=535, majf=0, minf=1 00:08:43.783 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:43.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.783 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:43.783 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:43.783 00:08:43.783 Run status group 0 (all jobs): 00:08:43.783 READ: bw=12.4MiB/s (13.0MB/s), 86.1KiB/s-6673KiB/s (88.2kB/s-6833kB/s), io=12.7MiB (13.3MB), run=1001-1022msec 00:08:43.784 WRITE: bw=18.6MiB/s (19.5MB/s), 2004KiB/s-8184KiB/s (2052kB/s-8380kB/s), io=19.0MiB (19.9MB), run=1001-1022msec 00:08:43.784 00:08:43.784 Disk stats (read/write): 00:08:43.784 nvme0n1: ios=72/512, merge=0/0, ticks=785/103, in_queue=888, util=87.17% 00:08:43.784 nvme0n2: ios=1345/1536, merge=0/0, ticks=544/289, in_queue=833, util=90.95% 00:08:43.784 nvme0n3: ios=1585/1629, merge=0/0, ticks=685/299, in_queue=984, util=94.89% 00:08:43.784 nvme0n4: ios=40/512, merge=0/0, ticks=1620/96, in_queue=1716, util=94.12% 00:08:43.784 20:38:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:08:43.784 [global] 00:08:43.784 thread=1 00:08:43.784 invalidate=1 00:08:43.784 rw=randwrite 00:08:43.784 time_based=1 00:08:43.784 runtime=1 00:08:43.784 ioengine=libaio 00:08:43.784 direct=1 00:08:43.784 bs=4096 00:08:43.784 iodepth=1 00:08:43.784 norandommap=0 00:08:43.784 numjobs=1 00:08:43.784 00:08:43.784 verify_dump=1 00:08:43.784 verify_backlog=512 00:08:43.784 verify_state_save=0 00:08:43.784 do_verify=1 00:08:43.784 verify=crc32c-intel 00:08:43.784 [job0] 00:08:43.784 filename=/dev/nvme0n1 00:08:43.784 [job1] 00:08:43.784 filename=/dev/nvme0n2 00:08:43.784 [job2] 00:08:43.784 filename=/dev/nvme0n3 00:08:43.784 [job3] 00:08:43.784 filename=/dev/nvme0n4 00:08:43.784 Could not set queue depth (nvme0n1) 00:08:43.784 Could not set queue depth (nvme0n2) 00:08:43.784 Could not set queue depth (nvme0n3) 00:08:43.784 Could not set queue depth (nvme0n4) 00:08:43.784 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:43.784 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:43.784 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:43.784 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:43.784 fio-3.35 00:08:43.784 Starting 4 threads 00:08:45.156 00:08:45.156 job0: (groupid=0, jobs=1): err= 0: 
pid=1587264: Tue Nov 26 20:38:48 2024 00:08:45.156 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:08:45.156 slat (nsec): min=6172, max=52928, avg=13821.58, stdev=6128.47 00:08:45.156 clat (usec): min=176, max=40392, avg=254.23, stdev=894.76 00:08:45.156 lat (usec): min=183, max=40399, avg=268.05, stdev=894.69 00:08:45.156 clat percentiles (usec): 00:08:45.156 | 1.00th=[ 188], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 212], 00:08:45.156 | 30.00th=[ 223], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 239], 00:08:45.156 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 253], 95.00th=[ 260], 00:08:45.156 | 99.00th=[ 285], 99.50th=[ 297], 99.90th=[ 783], 99.95th=[ 5276], 00:08:45.156 | 99.99th=[40633] 00:08:45.156 write: IOPS=2260, BW=9043KiB/s (9260kB/s)(9052KiB/1001msec); 0 zone resets 00:08:45.156 slat (nsec): min=7468, max=57060, avg=14715.07, stdev=6651.20 00:08:45.156 clat (usec): min=136, max=454, avg=175.87, stdev=23.78 00:08:45.156 lat (usec): min=144, max=464, avg=190.58, stdev=26.03 00:08:45.156 clat percentiles (usec): 00:08:45.156 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:08:45.156 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 174], 60.00th=[ 178], 00:08:45.156 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 198], 95.00th=[ 219], 00:08:45.156 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 297], 99.95th=[ 302], 00:08:45.156 | 99.99th=[ 453] 00:08:45.156 bw ( KiB/s): min= 8936, max= 8936, per=32.26%, avg=8936.00, stdev= 0.00, samples=1 00:08:45.156 iops : min= 2234, max= 2234, avg=2234.00, stdev= 0.00, samples=1 00:08:45.156 lat (usec) : 250=92.18%, 500=7.68%, 750=0.07%, 1000=0.02% 00:08:45.156 lat (msec) : 10=0.02%, 50=0.02% 00:08:45.156 cpu : usr=5.20%, sys=7.60%, ctx=4312, majf=0, minf=1 00:08:45.156 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:45.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:45.156 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:45.156 issued rwts: total=2048,2263,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:45.156 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:45.156 job1: (groupid=0, jobs=1): err= 0: pid=1587265: Tue Nov 26 20:38:48 2024 00:08:45.156 read: IOPS=1024, BW=4096KiB/s (4194kB/s)(4096KiB/1000msec) 00:08:45.156 slat (nsec): min=5679, max=45672, avg=15502.32, stdev=5491.70 00:08:45.156 clat (usec): min=197, max=41114, avg=725.95, stdev=4381.91 00:08:45.156 lat (usec): min=204, max=41123, avg=741.45, stdev=4381.82 00:08:45.156 clat percentiles (usec): 00:08:45.156 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 227], 20.00th=[ 233], 00:08:45.156 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 249], 00:08:45.156 | 70.00th=[ 253], 80.00th=[ 262], 90.00th=[ 289], 95.00th=[ 302], 00:08:45.156 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:45.156 | 99.99th=[41157] 00:08:45.156 write: IOPS=1045, BW=4180KiB/s (4280kB/s)(4180KiB/1000msec); 0 zone resets 00:08:45.156 slat (nsec): min=8309, max=54471, avg=15985.46, stdev=6480.63 00:08:45.156 clat (usec): min=155, max=315, avg=203.70, stdev=24.97 00:08:45.156 lat (usec): min=166, max=327, avg=219.68, stdev=24.08 00:08:45.156 clat percentiles (usec): 00:08:45.156 | 1.00th=[ 163], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 184], 00:08:45.156 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 204], 00:08:45.156 | 70.00th=[ 212], 80.00th=[ 227], 90.00th=[ 241], 95.00th=[ 249], 00:08:45.156 | 99.00th=[ 285], 99.50th=[ 289], 99.90th=[ 302], 
99.95th=[ 318], 00:08:45.156 | 99.99th=[ 318] 00:08:45.156 bw ( KiB/s): min= 4096, max= 4096, per=14.79%, avg=4096.00, stdev= 0.00, samples=1 00:08:45.156 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:45.156 lat (usec) : 250=79.85%, 500=19.53%, 750=0.05% 00:08:45.156 lat (msec) : 50=0.58% 00:08:45.156 cpu : usr=2.20%, sys=4.60%, ctx=2072, majf=0, minf=1 00:08:45.156 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:45.156 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:45.156 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:45.156 issued rwts: total=1024,1045,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:45.156 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:45.156 job2: (groupid=0, jobs=1): err= 0: pid=1587266: Tue Nov 26 20:38:48 2024 00:08:45.156 read: IOPS=1190, BW=4761KiB/s (4875kB/s)(4804KiB/1009msec) 00:08:45.156 slat (nsec): min=4378, max=62667, avg=10880.61, stdev=8191.89 00:08:45.156 clat (usec): min=183, max=41051, avg=578.31, stdev=3698.84 00:08:45.156 lat (usec): min=188, max=41068, avg=589.19, stdev=3699.98 00:08:45.156 clat percentiles (usec): 00:08:45.156 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 212], 00:08:45.156 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 231], 00:08:45.156 | 70.00th=[ 241], 80.00th=[ 258], 90.00th=[ 302], 95.00th=[ 355], 00:08:45.156 | 99.00th=[ 515], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:45.156 | 99.99th=[41157] 00:08:45.157 write: IOPS=1522, BW=6089KiB/s (6235kB/s)(6144KiB/1009msec); 0 zone resets 00:08:45.157 slat (nsec): min=5626, max=55656, avg=8000.86, stdev=3028.01 00:08:45.157 clat (usec): min=141, max=375, avg=182.95, stdev=22.00 00:08:45.157 lat (usec): min=149, max=383, avg=190.95, stdev=22.09 00:08:45.157 clat percentiles (usec): 00:08:45.157 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165], 00:08:45.157 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 188], 00:08:45.157 | 70.00th=[ 194], 80.00th=[ 198], 90.00th=[ 206], 95.00th=[ 212], 00:08:45.157 | 99.00th=[ 265], 99.50th=[ 289], 99.90th=[ 375], 99.95th=[ 375], 00:08:45.157 | 99.99th=[ 375] 00:08:45.157 bw ( KiB/s): min= 1840, max=10448, per=22.18%, avg=6144.00, stdev=6086.78, samples=2 00:08:45.157 iops : min= 460, max= 2612, avg=1536.00, stdev=1521.69, samples=2 00:08:45.157 lat (usec) : 250=89.73%, 500=9.79%, 750=0.11% 00:08:45.157 lat (msec) : 50=0.37% 00:08:45.157 cpu : usr=1.09%, sys=2.88%, ctx=2737, majf=0, minf=2 00:08:45.157 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:45.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:45.157 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:45.157 issued rwts: total=1201,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:45.157 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:45.157 job3: (groupid=0, jobs=1): err= 0: pid=1587267: Tue Nov 26 20:38:48 2024 00:08:45.157 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:08:45.157 slat (nsec): min=5902, max=50935, avg=13015.40, stdev=6146.25 00:08:45.157 clat (usec): min=205, max=761, avg=263.08, stdev=37.87 00:08:45.157 lat (usec): min=211, max=779, avg=276.10, stdev=40.50 00:08:45.157 clat percentiles (usec): 00:08:45.157 | 1.00th=[ 217], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 239], 00:08:45.157 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 262], 00:08:45.157 | 
70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 306], 95.00th=[ 330], 00:08:45.157 | 99.00th=[ 383], 99.50th=[ 449], 99.90th=[ 676], 99.95th=[ 709], 00:08:45.157 | 99.99th=[ 758] 00:08:45.157 write: IOPS=2140, BW=8563KiB/s (8769kB/s)(8572KiB/1001msec); 0 zone resets 00:08:45.157 slat (nsec): min=7472, max=54940, avg=13450.46, stdev=6551.19 00:08:45.157 clat (usec): min=139, max=331, avg=181.17, stdev=16.72 00:08:45.157 lat (usec): min=147, max=357, avg=194.62, stdev=20.32 00:08:45.157 clat percentiles (usec): 00:08:45.157 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 167], 00:08:45.157 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 186], 00:08:45.157 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 202], 95.00th=[ 208], 00:08:45.157 | 99.00th=[ 227], 99.50th=[ 237], 99.90th=[ 255], 99.95th=[ 260], 00:08:45.157 | 99.99th=[ 330] 00:08:45.157 bw ( KiB/s): min= 9584, max= 9584, per=34.60%, avg=9584.00, stdev= 0.00, samples=1 00:08:45.157 iops : min= 2396, max= 2396, avg=2396.00, stdev= 0.00, samples=1 00:08:45.157 lat (usec) : 250=69.63%, 500=30.28%, 750=0.07%, 1000=0.02% 00:08:45.157 cpu : usr=4.90%, sys=6.70%, ctx=4192, majf=0, minf=1 00:08:45.157 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:45.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:45.157 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:45.157 issued rwts: total=2048,2143,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:45.157 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:45.157 00:08:45.157 Run status group 0 (all jobs): 00:08:45.157 READ: bw=24.5MiB/s (25.7MB/s), 4096KiB/s-8184KiB/s (4194kB/s-8380kB/s), io=24.7MiB (25.9MB), run=1000-1009msec 00:08:45.157 WRITE: bw=27.0MiB/s (28.4MB/s), 4180KiB/s-9043KiB/s (4280kB/s-9260kB/s), io=27.3MiB (28.6MB), run=1000-1009msec 00:08:45.157 00:08:45.157 Disk stats (read/write): 00:08:45.157 nvme0n1: ios=1708/2048, merge=0/0, ticks=1619/344, in_queue=1963, util=97.90% 00:08:45.157 nvme0n2: ios=536/992, merge=0/0, ticks=1587/195, in_queue=1782, util=99.39% 00:08:45.157 nvme0n3: ios=1218/1536, merge=0/0, ticks=994/270, in_queue=1264, util=95.10% 00:08:45.157 nvme0n4: ios=1617/2048, merge=0/0, ticks=1319/355, in_queue=1674, util=98.32% 00:08:45.157 20:38:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:08:45.157 [global] 00:08:45.157 thread=1 00:08:45.157 invalidate=1 00:08:45.157 rw=write 00:08:45.157 time_based=1 00:08:45.157 runtime=1 00:08:45.157 ioengine=libaio 00:08:45.157 direct=1 00:08:45.157 bs=4096 00:08:45.157 iodepth=128 00:08:45.157 norandommap=0 00:08:45.157 numjobs=1 00:08:45.157 00:08:45.157 verify_dump=1 00:08:45.157 verify_backlog=512 00:08:45.157 verify_state_save=0 00:08:45.157 do_verify=1 00:08:45.157 verify=crc32c-intel 00:08:45.157 [job0] 00:08:45.157 filename=/dev/nvme0n1 00:08:45.157 [job1] 00:08:45.157 filename=/dev/nvme0n2 00:08:45.157 [job2] 00:08:45.157 filename=/dev/nvme0n3 00:08:45.157 [job3] 00:08:45.157 filename=/dev/nvme0n4 00:08:45.157 Could not set queue depth (nvme0n1) 00:08:45.157 Could not set queue depth (nvme0n2) 00:08:45.157 Could not set queue depth (nvme0n3) 00:08:45.157 Could not set queue depth (nvme0n4) 00:08:45.157 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:45.157 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:08:45.157 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:45.157 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:45.157 fio-3.35 00:08:45.157 Starting 4 threads 00:08:46.529 00:08:46.529 job0: (groupid=0, jobs=1): err= 0: pid=1587609: Tue Nov 26 20:38:50 2024 00:08:46.529 read: IOPS=2656, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1006msec) 00:08:46.529 slat (usec): min=3, max=16889, avg=136.51, stdev=900.67 00:08:46.529 clat (usec): min=4917, max=86170, avg=15885.71, stdev=11328.83 00:08:46.529 lat (usec): min=4928, max=86186, avg=16022.21, stdev=11452.73 00:08:46.529 clat percentiles (usec): 00:08:46.529 | 1.00th=[ 5407], 5.00th=[ 8979], 10.00th=[10028], 20.00th=[10421], 00:08:46.529 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11600], 60.00th=[13435], 00:08:46.529 | 70.00th=[13960], 80.00th=[16057], 90.00th=[30802], 95.00th=[42206], 00:08:46.529 | 99.00th=[65799], 99.50th=[81265], 99.90th=[86508], 99.95th=[86508], 00:08:46.529 | 99.99th=[86508] 00:08:46.529 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:08:46.529 slat (usec): min=4, max=53016, avg=196.65, stdev=1407.56 00:08:46.529 clat (msec): min=5, max=124, avg=25.15, stdev=25.90 00:08:46.529 lat (msec): min=5, max=124, avg=25.34, stdev=26.04 00:08:46.529 clat percentiles (msec): 00:08:46.529 | 1.00th=[ 7], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 11], 00:08:46.529 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 14], 00:08:46.529 | 70.00th=[ 23], 80.00th=[ 31], 90.00th=[ 74], 95.00th=[ 90], 00:08:46.529 | 99.00th=[ 116], 99.50th=[ 122], 99.90th=[ 125], 99.95th=[ 125], 00:08:46.529 | 99.99th=[ 125] 00:08:46.529 bw ( KiB/s): min=11768, max=12688, per=18.73%, avg=12228.00, stdev=650.54, samples=2 00:08:46.529 iops : min= 2942, max= 3172, avg=3057.00, stdev=162.63, samples=2 00:08:46.529 lat (msec) : 10=6.41%, 20=69.90%, 50=14.92%, 100=7.54%, 250=1.24% 00:08:46.529 cpu : usr=3.38%, sys=6.87%, ctx=241, majf=0, minf=1 00:08:46.529 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:08:46.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:46.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:46.529 issued rwts: total=2672,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:46.529 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:46.529 job1: (groupid=0, jobs=1): err= 0: pid=1587610: Tue Nov 26 20:38:50 2024 00:08:46.529 read: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec) 00:08:46.529 slat (usec): min=2, max=13221, avg=84.98, stdev=662.85 00:08:46.529 clat (usec): min=1483, max=56839, avg=12576.35, stdev=4330.70 00:08:46.529 lat (usec): min=1486, max=56850, avg=12661.33, stdev=4371.36 00:08:46.529 clat percentiles (usec): 00:08:46.529 | 1.00th=[ 4080], 5.00th=[ 7111], 10.00th=[ 8586], 20.00th=[10028], 00:08:46.529 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11338], 60.00th=[11863], 00:08:46.529 | 70.00th=[12911], 80.00th=[15795], 90.00th=[18482], 95.00th=[20317], 00:08:46.529 | 99.00th=[23725], 99.50th=[23987], 99.90th=[54264], 99.95th=[54264], 00:08:46.529 | 99.99th=[56886] 00:08:46.529 write: IOPS=5450, BW=21.3MiB/s (22.3MB/s)(21.4MiB/1007msec); 0 zone resets 00:08:46.529 slat (usec): min=3, max=16757, avg=78.53, stdev=451.32 00:08:46.529 clat (usec): min=590, max=27600, avg=11559.48, stdev=5237.63 00:08:46.529 lat (usec): min=595, 
max=38315, avg=11638.00, stdev=5283.58 00:08:46.529 clat percentiles (usec): 00:08:46.529 | 1.00th=[ 2474], 5.00th=[ 3097], 10.00th=[ 5538], 20.00th=[ 8979], 00:08:46.529 | 30.00th=[10159], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:08:46.529 | 70.00th=[11863], 80.00th=[12780], 90.00th=[18220], 95.00th=[25560], 00:08:46.529 | 99.00th=[27395], 99.50th=[27395], 99.90th=[27657], 99.95th=[27657], 00:08:46.529 | 99.99th=[27657] 00:08:46.529 bw ( KiB/s): min=20521, max=22408, per=32.88%, avg=21464.50, stdev=1334.31, samples=2 00:08:46.529 iops : min= 5130, max= 5602, avg=5366.00, stdev=333.75, samples=2 00:08:46.529 lat (usec) : 750=0.15% 00:08:46.529 lat (msec) : 2=0.14%, 4=3.49%, 10=20.58%, 20=67.66%, 50=7.92% 00:08:46.529 lat (msec) : 100=0.07% 00:08:46.529 cpu : usr=3.28%, sys=6.96%, ctx=614, majf=0, minf=1 00:08:46.529 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:08:46.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:46.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:46.529 issued rwts: total=5120,5489,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:46.529 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:46.529 job2: (groupid=0, jobs=1): err= 0: pid=1587611: Tue Nov 26 20:38:50 2024 00:08:46.529 read: IOPS=3029, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1014msec) 00:08:46.529 slat (usec): min=2, max=24387, avg=125.87, stdev=1094.06 00:08:46.529 clat (usec): min=4390, max=61765, avg=17808.25, stdev=8016.34 00:08:46.529 lat (usec): min=4394, max=61809, avg=17934.12, stdev=8112.11 00:08:46.529 clat percentiles (usec): 00:08:46.529 | 1.00th=[ 5669], 5.00th=[10552], 10.00th=[11469], 20.00th=[13435], 00:08:46.529 | 30.00th=[13698], 40.00th=[14091], 50.00th=[15008], 60.00th=[16712], 00:08:46.529 | 70.00th=[17695], 80.00th=[20317], 90.00th=[28181], 95.00th=[37487], 00:08:46.529 | 99.00th=[45876], 99.50th=[45876], 99.90th=[45876], 99.95th=[58459], 00:08:46.529 | 99.99th=[61604] 00:08:46.529 write: IOPS=3333, BW=13.0MiB/s (13.7MB/s)(13.2MiB/1014msec); 0 zone resets 00:08:46.529 slat (usec): min=3, max=17035, avg=160.31, stdev=964.28 00:08:46.529 clat (usec): min=822, max=67233, avg=21877.74, stdev=13367.11 00:08:46.529 lat (usec): min=835, max=67239, avg=22038.06, stdev=13459.84 00:08:46.529 clat percentiles (usec): 00:08:46.529 | 1.00th=[ 7504], 5.00th=[ 9503], 10.00th=[11731], 20.00th=[12518], 00:08:46.529 | 30.00th=[13173], 40.00th=[14615], 50.00th=[15401], 60.00th=[20317], 00:08:46.529 | 70.00th=[23987], 80.00th=[27657], 90.00th=[44303], 95.00th=[54264], 00:08:46.529 | 99.00th=[64226], 99.50th=[65799], 99.90th=[67634], 99.95th=[67634], 00:08:46.529 | 99.99th=[67634] 00:08:46.529 bw ( KiB/s): min= 9792, max=16232, per=19.93%, avg=13012.00, stdev=4553.77, samples=2 00:08:46.529 iops : min= 2448, max= 4058, avg=3253.00, stdev=1138.44, samples=2 00:08:46.529 lat (usec) : 1000=0.06% 00:08:46.529 lat (msec) : 10=4.29%, 20=63.55%, 50=28.50%, 100=3.60% 00:08:46.529 cpu : usr=2.57%, sys=4.74%, ctx=246, majf=0, minf=1 00:08:46.529 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:08:46.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:46.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:46.529 issued rwts: total=3072,3380,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:46.529 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:46.529 job3: (groupid=0, jobs=1): err= 0: pid=1587612: Tue Nov 26 20:38:50 
2024 00:08:46.529 read: IOPS=4185, BW=16.3MiB/s (17.1MB/s)(16.5MiB/1009msec) 00:08:46.529 slat (usec): min=2, max=29179, avg=111.71, stdev=787.19 00:08:46.529 clat (usec): min=1659, max=39612, avg=13592.30, stdev=3830.62 00:08:46.529 lat (usec): min=7607, max=39625, avg=13704.00, stdev=3870.12 00:08:46.529 clat percentiles (usec): 00:08:46.529 | 1.00th=[ 8717], 5.00th=[ 9765], 10.00th=[11076], 20.00th=[12125], 00:08:46.529 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12780], 60.00th=[13042], 00:08:46.529 | 70.00th=[13304], 80.00th=[14222], 90.00th=[15795], 95.00th=[17957], 00:08:46.529 | 99.00th=[32900], 99.50th=[32900], 99.90th=[39584], 99.95th=[39584], 00:08:46.529 | 99.99th=[39584] 00:08:46.529 write: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec); 0 zone resets 00:08:46.529 slat (usec): min=2, max=18167, avg=109.93, stdev=752.45 00:08:46.529 clat (usec): min=5962, max=57505, avg=15245.17, stdev=7150.40 00:08:46.529 lat (usec): min=5967, max=57539, avg=15355.09, stdev=7205.79 00:08:46.529 clat percentiles (usec): 00:08:46.529 | 1.00th=[ 6718], 5.00th=[ 9110], 10.00th=[10683], 20.00th=[11994], 00:08:46.529 | 30.00th=[12256], 40.00th=[13042], 50.00th=[13435], 60.00th=[13829], 00:08:46.529 | 70.00th=[14091], 80.00th=[16909], 90.00th=[21365], 95.00th=[26084], 00:08:46.529 | 99.00th=[47449], 99.50th=[51643], 99.90th=[57410], 99.95th=[57410], 00:08:46.529 | 99.99th=[57410] 00:08:46.529 bw ( KiB/s): min=17088, max=19768, per=28.23%, avg=18428.00, stdev=1895.05, samples=2 00:08:46.529 iops : min= 4272, max= 4942, avg=4607.00, stdev=473.76, samples=2 00:08:46.529 lat (msec) : 2=0.01%, 10=6.18%, 20=84.11%, 50=9.35%, 100=0.34% 00:08:46.529 cpu : usr=3.37%, sys=5.26%, ctx=388, majf=0, minf=1 00:08:46.529 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:08:46.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:46.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:46.529 issued rwts: total=4223,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:46.529 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:46.529 00:08:46.529 Run status group 0 (all jobs): 00:08:46.530 READ: bw=58.1MiB/s (60.9MB/s), 10.4MiB/s-19.9MiB/s (10.9MB/s-20.8MB/s), io=58.9MiB (61.8MB), run=1006-1014msec 00:08:46.530 WRITE: bw=63.8MiB/s (66.8MB/s), 11.9MiB/s-21.3MiB/s (12.5MB/s-22.3MB/s), io=64.6MiB (67.8MB), run=1006-1014msec 00:08:46.530 00:08:46.530 Disk stats (read/write): 00:08:46.530 nvme0n1: ios=2589/2679, merge=0/0, ticks=21111/26581, in_queue=47692, util=99.00% 00:08:46.530 nvme0n2: ios=4135/4543, merge=0/0, ticks=51519/50802, in_queue=102321, util=98.58% 00:08:46.530 nvme0n3: ios=2581/2903, merge=0/0, ticks=38842/56925, in_queue=95767, util=99.27% 00:08:46.530 nvme0n4: ios=3584/3959, merge=0/0, ticks=23062/29733, in_queue=52795, util=89.66% 00:08:46.530 20:38:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:08:46.530 [global] 00:08:46.530 thread=1 00:08:46.530 invalidate=1 00:08:46.530 rw=randwrite 00:08:46.530 time_based=1 00:08:46.530 runtime=1 00:08:46.530 ioengine=libaio 00:08:46.530 direct=1 00:08:46.530 bs=4096 00:08:46.530 iodepth=128 00:08:46.530 norandommap=0 00:08:46.530 numjobs=1 00:08:46.530 00:08:46.530 verify_dump=1 00:08:46.530 verify_backlog=512 00:08:46.530 verify_state_save=0 00:08:46.530 do_verify=1 00:08:46.530 verify=crc32c-intel 00:08:46.530 [job0] 00:08:46.530 
filename=/dev/nvme0n1 00:08:46.530 [job1] 00:08:46.530 filename=/dev/nvme0n2 00:08:46.530 [job2] 00:08:46.530 filename=/dev/nvme0n3 00:08:46.530 [job3] 00:08:46.530 filename=/dev/nvme0n4 00:08:46.530 Could not set queue depth (nvme0n1) 00:08:46.530 Could not set queue depth (nvme0n2) 00:08:46.530 Could not set queue depth (nvme0n3) 00:08:46.530 Could not set queue depth (nvme0n4) 00:08:46.788 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:46.788 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:46.788 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:46.788 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:46.788 fio-3.35 00:08:46.788 Starting 4 threads 00:08:48.160 00:08:48.160 job0: (groupid=0, jobs=1): err= 0: pid=1587849: Tue Nov 26 20:38:51 2024 00:08:48.160 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.7MiB/1042msec) 00:08:48.160 slat (nsec): min=1923, max=11649k, avg=99778.70, stdev=654347.80 00:08:48.160 clat (usec): min=3944, max=51790, avg=13905.19, stdev=6643.29 00:08:48.160 lat (usec): min=3951, max=51796, avg=14004.97, stdev=6658.21 00:08:48.160 clat percentiles (usec): 00:08:48.160 | 1.00th=[ 6980], 5.00th=[ 8717], 10.00th=[ 9634], 20.00th=[10552], 00:08:48.160 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11863], 60.00th=[12518], 00:08:48.160 | 70.00th=[14222], 80.00th=[16057], 90.00th=[19792], 95.00th=[22938], 00:08:48.160 | 99.00th=[48497], 99.50th=[51643], 99.90th=[51643], 99.95th=[51643], 00:08:48.160 | 99.99th=[51643] 00:08:48.160 write: IOPS=4913, BW=19.2MiB/s (20.1MB/s)(20.0MiB/1042msec); 0 zone resets 00:08:48.160 slat (usec): min=3, max=17130, avg=92.19, stdev=638.56 00:08:48.160 clat (usec): min=3237, max=38564, avg=12832.71, stdev=5380.38 00:08:48.160 lat (usec): min=3241, max=38570, avg=12924.90, stdev=5411.78 00:08:48.160 clat percentiles (usec): 00:08:48.160 | 1.00th=[ 4178], 5.00th=[ 6456], 10.00th=[ 7439], 20.00th=[10159], 00:08:48.160 | 30.00th=[10814], 40.00th=[11600], 50.00th=[11863], 60.00th=[12256], 00:08:48.160 | 70.00th=[12518], 80.00th=[13042], 90.00th=[21627], 95.00th=[23462], 00:08:48.160 | 99.00th=[34341], 99.50th=[37487], 99.90th=[38011], 99.95th=[38011], 00:08:48.160 | 99.99th=[38536] 00:08:48.160 bw ( KiB/s): min=18136, max=22778, per=30.58%, avg=20457.00, stdev=3282.39, samples=2 00:08:48.160 iops : min= 4534, max= 5694, avg=5114.00, stdev=820.24, samples=2 00:08:48.160 lat (msec) : 4=0.61%, 10=14.76%, 20=73.79%, 50=10.53%, 100=0.30% 00:08:48.160 cpu : usr=4.51%, sys=6.15%, ctx=502, majf=0, minf=1 00:08:48.160 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:08:48.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:48.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:48.160 issued rwts: total=4782,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:48.160 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:48.160 job1: (groupid=0, jobs=1): err= 0: pid=1587854: Tue Nov 26 20:38:51 2024 00:08:48.160 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:08:48.160 slat (usec): min=2, max=16126, avg=152.46, stdev=839.46 00:08:48.160 clat (usec): min=8736, max=43386, avg=19091.54, stdev=7356.06 00:08:48.160 lat (usec): min=9013, max=43396, avg=19243.99, stdev=7410.93 00:08:48.160 clat 
percentiles (usec): 00:08:48.160 | 1.00th=[ 9896], 5.00th=[11338], 10.00th=[11863], 20.00th=[12518], 00:08:48.160 | 30.00th=[13042], 40.00th=[14222], 50.00th=[18220], 60.00th=[20317], 00:08:48.160 | 70.00th=[22414], 80.00th=[25560], 90.00th=[29754], 95.00th=[34866], 00:08:48.160 | 99.00th=[41681], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:08:48.160 | 99.99th=[43254] 00:08:48.160 write: IOPS=3759, BW=14.7MiB/s (15.4MB/s)(14.7MiB/1003msec); 0 zone resets 00:08:48.160 slat (usec): min=2, max=8469, avg=113.16, stdev=637.09 00:08:48.160 clat (usec): min=785, max=34496, avg=15434.18, stdev=4730.23 00:08:48.160 lat (usec): min=8681, max=34503, avg=15547.34, stdev=4764.59 00:08:48.160 clat percentiles (usec): 00:08:48.160 | 1.00th=[ 8979], 5.00th=[10028], 10.00th=[10814], 20.00th=[12125], 00:08:48.160 | 30.00th=[12780], 40.00th=[13173], 50.00th=[13566], 60.00th=[14353], 00:08:48.160 | 70.00th=[16909], 80.00th=[20055], 90.00th=[20841], 95.00th=[25560], 00:08:48.160 | 99.00th=[30802], 99.50th=[32637], 99.90th=[34341], 99.95th=[34341], 00:08:48.160 | 99.99th=[34341] 00:08:48.160 bw ( KiB/s): min=13528, max=15592, per=21.77%, avg=14560.00, stdev=1459.47, samples=2 00:08:48.160 iops : min= 3382, max= 3898, avg=3640.00, stdev=364.87, samples=2 00:08:48.160 lat (usec) : 1000=0.01% 00:08:48.160 lat (msec) : 10=3.13%, 20=66.42%, 50=30.44% 00:08:48.160 cpu : usr=3.59%, sys=5.09%, ctx=339, majf=0, minf=1 00:08:48.160 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:08:48.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:48.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:48.160 issued rwts: total=3584,3771,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:48.160 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:48.160 job2: (groupid=0, jobs=1): err= 0: pid=1587855: Tue Nov 26 20:38:51 2024 00:08:48.160 read: IOPS=4523, BW=17.7MiB/s (18.5MB/s)(17.7MiB/1003msec) 00:08:48.160 slat (usec): min=2, max=8424, avg=108.83, stdev=631.99 00:08:48.160 clat (usec): min=2026, max=27074, avg=13989.25, stdev=2897.71 00:08:48.160 lat (usec): min=6808, max=29282, avg=14098.07, stdev=2934.13 00:08:48.160 clat percentiles (usec): 00:08:48.160 | 1.00th=[ 8848], 5.00th=[10421], 10.00th=[10945], 20.00th=[12518], 00:08:48.160 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13435], 60.00th=[13960], 00:08:48.160 | 70.00th=[14222], 80.00th=[15008], 90.00th=[17433], 95.00th=[20055], 00:08:48.160 | 99.00th=[23725], 99.50th=[25822], 99.90th=[26084], 99.95th=[26084], 00:08:48.160 | 99.99th=[27132] 00:08:48.160 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:08:48.160 slat (usec): min=3, max=8915, avg=102.28, stdev=526.07 00:08:48.160 clat (usec): min=7942, max=25851, avg=13727.87, stdev=2566.45 00:08:48.160 lat (usec): min=7948, max=25860, avg=13830.15, stdev=2593.76 00:08:48.160 clat percentiles (usec): 00:08:48.160 | 1.00th=[ 9372], 5.00th=[10814], 10.00th=[11600], 20.00th=[12256], 00:08:48.160 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13042], 60.00th=[13304], 00:08:48.160 | 70.00th=[13698], 80.00th=[14877], 90.00th=[17695], 95.00th=[19268], 00:08:48.160 | 99.00th=[23462], 99.50th=[24249], 99.90th=[24511], 99.95th=[25822], 00:08:48.160 | 99.99th=[25822] 00:08:48.161 bw ( KiB/s): min=16384, max=20439, per=27.52%, avg=18411.50, stdev=2867.32, samples=2 00:08:48.161 iops : min= 4096, max= 5109, avg=4602.50, stdev=716.30, samples=2 00:08:48.161 lat (msec) : 4=0.01%, 10=2.77%, 20=92.44%, 50=4.78% 
00:08:48.161 cpu : usr=4.09%, sys=6.39%, ctx=488, majf=0, minf=2 00:08:48.161 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:08:48.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:48.161 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:48.161 issued rwts: total=4537,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:48.161 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:48.161 job3: (groupid=0, jobs=1): err= 0: pid=1587856: Tue Nov 26 20:38:51 2024 00:08:48.161 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:08:48.161 slat (usec): min=2, max=16040, avg=141.72, stdev=988.40 00:08:48.161 clat (usec): min=5546, max=43193, avg=18129.08, stdev=7531.12 00:08:48.161 lat (usec): min=5561, max=43201, avg=18270.79, stdev=7583.13 00:08:48.161 clat percentiles (usec): 00:08:48.161 | 1.00th=[ 7373], 5.00th=[10421], 10.00th=[11469], 20.00th=[12518], 00:08:48.161 | 30.00th=[13566], 40.00th=[13960], 50.00th=[15270], 60.00th=[15795], 00:08:48.161 | 70.00th=[19792], 80.00th=[23987], 90.00th=[28181], 95.00th=[35390], 00:08:48.161 | 99.00th=[41157], 99.50th=[41157], 99.90th=[43254], 99.95th=[43254], 00:08:48.161 | 99.99th=[43254] 00:08:48.161 write: IOPS=3890, BW=15.2MiB/s (15.9MB/s)(15.3MiB/1009msec); 0 zone resets 00:08:48.161 slat (usec): min=3, max=11384, avg=116.80, stdev=713.29 00:08:48.161 clat (usec): min=4442, max=30115, avg=15755.91, stdev=5258.71 00:08:48.161 lat (usec): min=4455, max=30124, avg=15872.70, stdev=5292.12 00:08:48.161 clat percentiles (usec): 00:08:48.161 | 1.00th=[ 5669], 5.00th=[ 7635], 10.00th=[10290], 20.00th=[12780], 00:08:48.161 | 30.00th=[13042], 40.00th=[13566], 50.00th=[13829], 60.00th=[14615], 00:08:48.161 | 70.00th=[18744], 80.00th=[20317], 90.00th=[23725], 95.00th=[26608], 00:08:48.161 | 99.00th=[27657], 99.50th=[27657], 99.90th=[27657], 99.95th=[30016], 00:08:48.161 | 99.99th=[30016] 00:08:48.161 bw ( KiB/s): min=12336, max=18056, per=22.72%, avg=15196.00, stdev=4044.65, samples=2 00:08:48.161 iops : min= 3084, max= 4514, avg=3799.00, stdev=1011.16, samples=2 00:08:48.161 lat (msec) : 10=6.80%, 20=66.47%, 50=26.72% 00:08:48.161 cpu : usr=4.46%, sys=5.95%, ctx=339, majf=0, minf=1 00:08:48.161 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:08:48.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:48.161 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:48.161 issued rwts: total=3584,3926,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:48.161 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:48.161 00:08:48.161 Run status group 0 (all jobs): 00:08:48.161 READ: bw=61.8MiB/s (64.8MB/s), 13.9MiB/s-17.9MiB/s (14.5MB/s-18.8MB/s), io=64.4MiB (67.5MB), run=1003-1042msec 00:08:48.161 WRITE: bw=65.3MiB/s (68.5MB/s), 14.7MiB/s-19.2MiB/s (15.4MB/s-20.1MB/s), io=68.1MiB (71.4MB), run=1003-1042msec 00:08:48.161 00:08:48.161 Disk stats (read/write): 00:08:48.161 nvme0n1: ios=4146/4524, merge=0/0, ticks=42807/42465, in_queue=85272, util=91.88% 00:08:48.161 nvme0n2: ios=2804/3072, merge=0/0, ticks=17723/13665, in_queue=31388, util=86.89% 00:08:48.161 nvme0n3: ios=3648/4096, merge=0/0, ticks=20395/19428, in_queue=39823, util=88.52% 00:08:48.161 nvme0n4: ios=3106/3183, merge=0/0, ticks=39389/32055, in_queue=71444, util=96.95% 00:08:48.161 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:08:48.161 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@59 -- # fio_pid=1587992 00:08:48.161 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:08:48.161 20:38:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:08:48.161 [global] 00:08:48.161 thread=1 00:08:48.161 invalidate=1 00:08:48.161 rw=read 00:08:48.161 time_based=1 00:08:48.161 runtime=10 00:08:48.161 ioengine=libaio 00:08:48.161 direct=1 00:08:48.161 bs=4096 00:08:48.161 iodepth=1 00:08:48.161 norandommap=1 00:08:48.161 numjobs=1 00:08:48.161 00:08:48.161 [job0] 00:08:48.161 filename=/dev/nvme0n1 00:08:48.161 [job1] 00:08:48.161 filename=/dev/nvme0n2 00:08:48.161 [job2] 00:08:48.161 filename=/dev/nvme0n3 00:08:48.161 [job3] 00:08:48.161 filename=/dev/nvme0n4 00:08:48.161 Could not set queue depth (nvme0n1) 00:08:48.161 Could not set queue depth (nvme0n2) 00:08:48.161 Could not set queue depth (nvme0n3) 00:08:48.161 Could not set queue depth (nvme0n4) 00:08:48.161 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:48.161 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:48.161 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:48.161 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:48.161 fio-3.35 00:08:48.161 Starting 4 threads 00:08:51.441 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:08:51.441 20:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:08:51.441 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=2240512, buflen=4096 00:08:51.441 fio: pid=1588085, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:51.441 20:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:51.441 20:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:08:51.698 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=21291008, buflen=4096 00:08:51.698 fio: pid=1588084, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:51.956 20:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:51.956 20:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:08:51.956 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=352256, buflen=4096 00:08:51.956 fio: pid=1588081, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:52.213 20:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:52.213 20:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:08:52.213 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=7172096, buflen=4096 00:08:52.213 fio: pid=1588082, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:52.213 00:08:52.213 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1588081: Tue Nov 26 20:38:55 2024 00:08:52.213 read: IOPS=24, BW=98.5KiB/s (101kB/s)(344KiB/3492msec) 00:08:52.213 slat (usec): min=12, max=14947, avg=204.28, stdev=1601.85 00:08:52.213 clat (usec): min=310, max=44997, avg=40111.29, stdev=6201.51 00:08:52.213 lat (usec): min=338, max=45015, avg=40144.14, stdev=6203.56 00:08:52.213 clat percentiles (usec): 00:08:52.213 | 1.00th=[ 310], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:08:52.213 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:52.214 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:08:52.214 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:08:52.214 | 99.99th=[44827] 00:08:52.214 bw ( KiB/s): min= 96, max= 112, per=1.25%, avg=100.00, stdev= 6.69, samples=6 00:08:52.214 iops : min= 24, max= 28, avg=25.00, stdev= 1.67, samples=6 00:08:52.214 lat (usec) : 500=2.30% 00:08:52.214 lat (msec) : 50=96.55% 00:08:52.214 cpu : usr=0.00%, sys=0.11%, ctx=92, majf=0, minf=1 00:08:52.214 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:52.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.214 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.214 issued rwts: total=87,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:52.214 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:52.214 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1588082: Tue Nov 26 20:38:55 2024 00:08:52.214 read: IOPS=463, BW=1853KiB/s (1897kB/s)(7004KiB/3780msec) 00:08:52.214 slat (usec): min=5, max=15711, avg=30.78, stdev=495.72 00:08:52.214 clat (usec): min=158, max=41522, avg=2110.48, stdev=8614.82 00:08:52.214 lat (usec): min=164, max=50939, avg=2139.62, stdev=8653.37 00:08:52.214 clat percentiles (usec): 00:08:52.214 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 180], 00:08:52.214 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 194], 60.00th=[ 202], 00:08:52.214 | 70.00th=[ 215], 80.00th=[ 233], 90.00th=[ 253], 95.00th=[ 375], 00:08:52.214 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:08:52.214 | 99.99th=[41681] 00:08:52.214 bw ( KiB/s): min= 96, max=10176, per=24.84%, avg=1993.43, stdev=3761.92, samples=7 00:08:52.214 iops : min= 24, max= 2544, avg=498.29, stdev=940.52, samples=7 00:08:52.214 lat (usec) : 250=89.04%, 500=6.22% 00:08:52.214 lat (msec) : 50=4.68% 00:08:52.214 cpu : usr=0.26%, sys=0.69%, ctx=1759, majf=0, minf=2 00:08:52.214 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:52.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.214 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.214 issued rwts: total=1752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:52.214 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:52.214 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1588084: Tue Nov 26 20:38:55 2024 
00:08:52.214 read: IOPS=1612, BW=6447KiB/s (6602kB/s)(20.3MiB/3225msec) 00:08:52.214 slat (usec): min=4, max=11692, avg=12.66, stdev=162.55 00:08:52.214 clat (usec): min=170, max=41213, avg=600.79, stdev=3859.45 00:08:52.214 lat (usec): min=178, max=41220, avg=613.45, stdev=3863.69 00:08:52.214 clat percentiles (usec): 00:08:52.214 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 202], 00:08:52.214 | 30.00th=[ 210], 40.00th=[ 219], 50.00th=[ 231], 60.00th=[ 241], 00:08:52.214 | 70.00th=[ 249], 80.00th=[ 258], 90.00th=[ 273], 95.00th=[ 297], 00:08:52.214 | 99.00th=[ 545], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:52.214 | 99.99th=[41157] 00:08:52.214 bw ( KiB/s): min= 96, max=15112, per=84.67%, avg=6793.33, stdev=6310.75, samples=6 00:08:52.214 iops : min= 24, max= 3778, avg=1698.33, stdev=1577.69, samples=6 00:08:52.214 lat (usec) : 250=72.05%, 500=26.87%, 750=0.15% 00:08:52.214 lat (msec) : 50=0.90% 00:08:52.214 cpu : usr=0.71%, sys=2.14%, ctx=5202, majf=0, minf=2 00:08:52.214 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:52.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.214 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.214 issued rwts: total=5199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:52.214 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:52.214 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1588085: Tue Nov 26 20:38:55 2024 00:08:52.214 read: IOPS=185, BW=741KiB/s (758kB/s)(2188KiB/2954msec) 00:08:52.214 slat (nsec): min=6015, max=65928, avg=16712.79, stdev=7324.54 00:08:52.214 clat (usec): min=206, max=42360, avg=5335.39, stdev=13529.07 00:08:52.214 lat (usec): min=221, max=42369, avg=5352.10, stdev=13529.42 00:08:52.214 clat percentiles (usec): 00:08:52.214 | 1.00th=[ 225], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 249], 00:08:52.214 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 277], 60.00th=[ 302], 00:08:52.214 | 70.00th=[ 326], 80.00th=[ 351], 90.00th=[41157], 95.00th=[41681], 00:08:52.214 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:52.214 | 99.99th=[42206] 00:08:52.214 bw ( KiB/s): min= 176, max= 1496, per=9.37%, avg=752.00, stdev=501.55, samples=5 00:08:52.214 iops : min= 44, max= 374, avg=188.00, stdev=125.39, samples=5 00:08:52.214 lat (usec) : 250=21.72%, 500=64.96%, 750=0.73%, 1000=0.18% 00:08:52.214 lat (msec) : 50=12.23% 00:08:52.214 cpu : usr=0.30%, sys=0.37%, ctx=549, majf=0, minf=1 00:08:52.214 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:52.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.214 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.214 issued rwts: total=548,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:52.214 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:52.214 00:08:52.214 Run status group 0 (all jobs): 00:08:52.214 READ: bw=8023KiB/s (8216kB/s), 98.5KiB/s-6447KiB/s (101kB/s-6602kB/s), io=29.6MiB (31.1MB), run=2954-3780msec 00:08:52.214 00:08:52.214 Disk stats (read/write): 00:08:52.214 nvme0n1: ios=123/0, merge=0/0, ticks=4369/0, in_queue=4369, util=99.34% 00:08:52.214 nvme0n2: ios=1786/0, merge=0/0, ticks=4430/0, in_queue=4430, util=98.61% 00:08:52.214 nvme0n3: ios=5195/0, merge=0/0, ticks=2949/0, in_queue=2949, util=96.42% 00:08:52.214 nvme0n4: ios=542/0, merge=0/0, ticks=2832/0, in_queue=2832, 
util=96.72% 00:08:52.471 20:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:52.471 20:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:08:52.729 20:38:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:52.729 20:38:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:08:52.987 20:38:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:52.987 20:38:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:08:53.245 20:38:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:53.245 20:38:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:08:53.502 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:08:53.502 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1587992 00:08:53.502 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:08:53.502 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:53.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.758 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:53.758 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:08:53.758 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:53.758 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:53.758 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:53.758 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:53.758 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:08:53.758 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:08:53.758 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:08:53.758 nvmf hotplug test: fio failed as expected 00:08:53.758 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:54.016 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:08:54.016 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:08:54.016 20:38:57 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:08:54.016 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:08:54.016 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:08:54.016 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:54.016 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:08:54.016 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:54.016 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:08:54.016 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:54.016 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:54.016 rmmod nvme_tcp 00:08:54.016 rmmod nvme_fabrics 00:08:54.016 rmmod nvme_keyring 00:08:54.016 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:54.016 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:08:54.016 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:08:54.016 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1585948 ']' 00:08:54.016 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1585948 00:08:54.016 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1585948 ']' 00:08:54.016 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1585948 00:08:54.016 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:08:54.016 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.016 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1585948 00:08:54.016 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:54.016 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:54.016 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1585948' 00:08:54.016 killing process with pid 1585948 00:08:54.016 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1585948 00:08:54.016 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1585948 00:08:54.273 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:54.273 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:54.273 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:54.273 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:08:54.273 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:08:54.273 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:54.273 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@791 -- # iptables-restore 00:08:54.273 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:54.273 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:54.273 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.273 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:54.273 20:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.811 20:38:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:56.811 00:08:56.811 real 0m24.389s 00:08:56.811 user 1m26.170s 00:08:56.811 sys 0m6.556s 00:08:56.811 20:38:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.811 20:38:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:56.811 ************************************ 00:08:56.811 END TEST nvmf_fio_target 00:08:56.811 ************************************ 00:08:56.811 20:38:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:08:56.811 20:38:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:56.811 20:38:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.811 20:38:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:56.811 ************************************ 00:08:56.811 START TEST nvmf_bdevio 00:08:56.811 ************************************ 00:08:56.811 20:38:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:08:56.811 * Looking for test storage... 
00:08:56.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:56.811 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:56.811 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:08:56.811 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:56.811 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:56.811 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:56.811 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:56.811 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:56.811 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:08:56.811 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:08:56.811 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:08:56.811 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:08:56.811 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:08:56.811 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:08:56.811 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:08:56.811 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:56.811 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:08:56.811 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:08:56.811 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:56.811 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:56.811 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:08:56.811 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:08:56.811 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:56.811 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:08:56.811 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:56.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.812 --rc genhtml_branch_coverage=1 00:08:56.812 --rc genhtml_function_coverage=1 00:08:56.812 --rc genhtml_legend=1 00:08:56.812 --rc geninfo_all_blocks=1 00:08:56.812 --rc geninfo_unexecuted_blocks=1 00:08:56.812 00:08:56.812 ' 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:56.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.812 --rc genhtml_branch_coverage=1 00:08:56.812 --rc genhtml_function_coverage=1 00:08:56.812 --rc genhtml_legend=1 00:08:56.812 --rc geninfo_all_blocks=1 00:08:56.812 --rc geninfo_unexecuted_blocks=1 00:08:56.812 00:08:56.812 ' 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:56.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.812 --rc genhtml_branch_coverage=1 00:08:56.812 --rc genhtml_function_coverage=1 00:08:56.812 --rc genhtml_legend=1 00:08:56.812 --rc geninfo_all_blocks=1 00:08:56.812 --rc geninfo_unexecuted_blocks=1 00:08:56.812 00:08:56.812 ' 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:56.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.812 --rc genhtml_branch_coverage=1 00:08:56.812 --rc genhtml_function_coverage=1 00:08:56.812 --rc genhtml_legend=1 00:08:56.812 --rc geninfo_all_blocks=1 00:08:56.812 --rc geninfo_unexecuted_blocks=1 00:08:56.812 00:08:56.812 ' 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:56.812 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:08:56.812 20:39:00 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:58.713 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:58.713 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:08:58.713 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:58.713 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:58.713 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:58.713 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:58.714 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:58.714 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:58.714 20:39:02 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:58.714 Found net devices under 0000:09:00.0: cvl_0_0 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:58.714 Found net devices under 0000:09:00.1: cvl_0_1 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:58.714 
20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:58.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:58.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:08:58.714 00:08:58.714 --- 10.0.0.2 ping statistics --- 00:08:58.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.714 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:58.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:58.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:08:58.714 00:08:58.714 --- 10.0.0.1 ping statistics --- 00:08:58.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.714 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:58.714 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:58.715 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:08:58.715 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:58.715 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:58.715 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:58.715 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1590954 00:08:58.715 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:08:58.715 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1590954 00:08:58.715 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1590954 ']' 00:08:58.715 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.715 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:58.715 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.715 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:58.715 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:58.715 [2024-11-26 20:39:02.401224] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
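Note: the nvmf_tcp_init sequence traced above builds the initiator/target topology for the TCP tests: one e810 port (cvl_0_0) is moved into a private network namespace and addressed as the target, the other (cvl_0_1) stays in the root namespace as the initiator, and TCP port 4420 is opened with a tagged iptables rule so teardown can later remove exactly that rule. A hedged sketch of the same sequence, with interface names and addresses copied from the log (an approximation of nvmf/common.sh, not the script itself):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The two successful pings in the log (0.189 ms and 0.048 ms) are the gate for this step; only then is nvmf_tgt started inside the namespace.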
00:08:58.715 [2024-11-26 20:39:02.401331] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:58.973 [2024-11-26 20:39:02.473285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:58.973 [2024-11-26 20:39:02.530399] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:58.973 [2024-11-26 20:39:02.530453] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:58.974 [2024-11-26 20:39:02.530481] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:58.974 [2024-11-26 20:39:02.530492] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:58.974 [2024-11-26 20:39:02.530503] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:58.974 [2024-11-26 20:39:02.532024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:58.974 [2024-11-26 20:39:02.532133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:58.974 [2024-11-26 20:39:02.532204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:58.974 [2024-11-26 20:39:02.532207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:58.974 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:58.974 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:08:58.974 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:58.974 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:58.974 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:59.232 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.232 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:59.232 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.232 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:59.232 [2024-11-26 20:39:02.681986] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:59.232 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.232 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:59.232 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.232 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:59.232 Malloc0 00:08:59.232 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.232 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:59.232 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.232 20:39:02 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:59.232 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.232 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:59.232 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.232 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:59.232 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.232 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:59.232 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.232 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:59.232 [2024-11-26 20:39:02.743709] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:59.232 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.232 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:08:59.232 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:08:59.232 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:08:59.232 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:08:59.232 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:59.232 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:59.232 { 00:08:59.232 "params": { 00:08:59.232 "name": "Nvme$subsystem", 00:08:59.232 "trtype": "$TEST_TRANSPORT", 00:08:59.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:59.232 "adrfam": "ipv4", 00:08:59.232 "trsvcid": "$NVMF_PORT", 00:08:59.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:59.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:59.232 "hdgst": ${hdgst:-false}, 00:08:59.232 "ddgst": ${ddgst:-false} 00:08:59.232 }, 00:08:59.232 "method": "bdev_nvme_attach_controller" 00:08:59.232 } 00:08:59.232 EOF 00:08:59.232 )") 00:08:59.232 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:08:59.232 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:08:59.232 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:08:59.232 20:39:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:59.232 "params": { 00:08:59.232 "name": "Nvme1", 00:08:59.232 "trtype": "tcp", 00:08:59.232 "traddr": "10.0.0.2", 00:08:59.232 "adrfam": "ipv4", 00:08:59.232 "trsvcid": "4420", 00:08:59.232 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:59.232 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:59.232 "hdgst": false, 00:08:59.232 "ddgst": false 00:08:59.232 }, 00:08:59.232 "method": "bdev_nvme_attach_controller" 00:08:59.232 }' 00:08:59.232 [2024-11-26 20:39:02.795408] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
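Note: bdevio.sh drives the target entirely over JSON-RPC: create the TCP transport, back a namespace with a 64 MiB malloc bdev, publish it as nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, then hand the bdevio binary a generated bdev_nvme_attach_controller config on /dev/fd/62. As a sketch, the same target-side setup could be reproduced by hand with scripts/rpc.py against the running nvmf_tgt (rpc_cmd in the log is a thin wrapper over rpc.py on the default /var/tmp/spdk.sock socket that waitforlisten polls above):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420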
00:08:59.232 [2024-11-26 20:39:02.795485] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1590979 ] 00:08:59.232 [2024-11-26 20:39:02.866494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:59.490 [2024-11-26 20:39:02.932906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.490 [2024-11-26 20:39:02.932956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:59.490 [2024-11-26 20:39:02.932959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.748 I/O targets: 00:08:59.748 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:08:59.748 00:08:59.748 00:08:59.748 CUnit - A unit testing framework for C - Version 2.1-3 00:08:59.748 http://cunit.sourceforge.net/ 00:08:59.748 00:08:59.748 00:08:59.748 Suite: bdevio tests on: Nvme1n1 00:08:59.748 Test: blockdev write read block ...passed 00:08:59.748 Test: blockdev write zeroes read block ...passed 00:08:59.748 Test: blockdev write zeroes read no split ...passed 00:08:59.748 Test: blockdev write zeroes read split ...passed 00:08:59.748 Test: blockdev write zeroes read split partial ...passed 00:08:59.748 Test: blockdev reset ...[2024-11-26 20:39:03.360924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:08:59.748 [2024-11-26 20:39:03.361030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1854cb0 (9): Bad file descriptor 00:08:59.748 [2024-11-26 20:39:03.415654] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:08:59.748 passed 00:08:59.748 Test: blockdev write read 8 blocks ...passed 00:09:00.006 Test: blockdev write read size > 128k ...passed 00:09:00.006 Test: blockdev write read invalid size ...passed 00:09:00.006 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:00.006 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:00.006 Test: blockdev write read max offset ...passed 00:09:00.006 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:00.006 Test: blockdev writev readv 8 blocks ...passed 00:09:00.006 Test: blockdev writev readv 30 x 1block ...passed 00:09:00.006 Test: blockdev writev readv block ...passed 00:09:00.006 Test: blockdev writev readv size > 128k ...passed 00:09:00.006 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:00.006 Test: blockdev comparev and writev ...[2024-11-26 20:39:03.631422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:00.006 [2024-11-26 20:39:03.631458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:00.006 [2024-11-26 20:39:03.631484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:00.006 [2024-11-26 20:39:03.631503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:00.006 [2024-11-26 20:39:03.631823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:00.006 [2024-11-26 20:39:03.631850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:00.006 [2024-11-26 20:39:03.631872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:00.006 [2024-11-26 20:39:03.631901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:00.006 [2024-11-26 20:39:03.632209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:00.006 [2024-11-26 20:39:03.632234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:00.006 [2024-11-26 20:39:03.632256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:00.006 [2024-11-26 20:39:03.632272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:00.006 [2024-11-26 20:39:03.632591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:00.006 [2024-11-26 20:39:03.632616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:00.006 [2024-11-26 20:39:03.632638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:00.006 [2024-11-26 20:39:03.632654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:00.006 passed 00:09:00.264 Test: blockdev nvme passthru rw ...passed 00:09:00.264 Test: blockdev nvme passthru vendor specific ...[2024-11-26 20:39:03.716549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:00.264 [2024-11-26 20:39:03.716578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:00.264 [2024-11-26 20:39:03.716732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:00.264 [2024-11-26 20:39:03.716757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:00.264 [2024-11-26 20:39:03.716904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:00.264 [2024-11-26 20:39:03.716928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:00.264 [2024-11-26 20:39:03.717075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:00.264 [2024-11-26 20:39:03.717100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:00.264 passed 00:09:00.264 Test: blockdev nvme admin passthru ...passed 00:09:00.264 Test: blockdev copy ...passed 00:09:00.264 00:09:00.264 Run Summary: Type Total Ran Passed Failed Inactive 00:09:00.264 suites 1 1 n/a 0 0 00:09:00.264 tests 23 23 23 0 0 00:09:00.264 asserts 152 152 152 0 n/a 00:09:00.264 00:09:00.264 Elapsed time = 1.057 seconds 00:09:00.522 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:00.522 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.522 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:00.522 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.522 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:00.522 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:00.522 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:00.522 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:00.522 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:00.522 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:00.522 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:00.522 20:39:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:00.522 rmmod nvme_tcp 00:09:00.522 rmmod nvme_fabrics 00:09:00.522 rmmod nvme_keyring 00:09:00.522 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:00.522 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:00.522 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:09:00.522 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1590954 ']' 00:09:00.522 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1590954 00:09:00.522 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1590954 ']' 00:09:00.522 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1590954 00:09:00.522 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:00.522 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:00.522 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1590954 00:09:00.522 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:00.522 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:00.522 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1590954' 00:09:00.522 killing process with pid 1590954 00:09:00.522 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1590954 00:09:00.522 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1590954 00:09:00.780 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:00.780 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:00.780 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:00.780 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:00.780 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:00.780 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:00.780 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:00.780 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:00.780 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:00.780 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.780 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:00.780 20:39:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:03.358 00:09:03.358 real 0m6.399s 00:09:03.358 user 0m10.431s 00:09:03.358 sys 0m2.068s 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:03.358 ************************************ 00:09:03.358 END TEST nvmf_bdevio 00:09:03.358 ************************************ 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:03.358 00:09:03.358 real 3m57.458s 00:09:03.358 user 10m22.387s 00:09:03.358 sys 1m7.244s 
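Note: the teardown that closes the bdevio run above is the mirror image of the setup: delete the subsystem, stop the target, unload the kernel NVMe modules, and undo the tagged firewall rule and the test namespace. A sketch assembled from the log lines (the killprocess/remove_spdk_ns internals are assumptions inferred from the trace; pid 1590954 is this run's nvmf_tgt):

    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill 1590954                                           # killprocess: SIGTERM, then wait for exit
    modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring      # matches the rmmod lines above
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged ACCEPT rule
    ip netns delete cvl_0_0_ns_spdk                        # remove_spdk_ns (assumed behaviour)
    ip -4 addr flush cvl_0_1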
00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:03.358 ************************************ 00:09:03.358 END TEST nvmf_target_core 00:09:03.358 ************************************ 00:09:03.358 20:39:06 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:03.358 20:39:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:03.358 20:39:06 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.358 20:39:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:03.358 ************************************ 00:09:03.358 START TEST nvmf_target_extra 00:09:03.358 ************************************ 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:03.358 * Looking for test storage... 00:09:03.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:03.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.358 --rc genhtml_branch_coverage=1 00:09:03.358 --rc genhtml_function_coverage=1 00:09:03.358 --rc genhtml_legend=1 00:09:03.358 --rc geninfo_all_blocks=1 00:09:03.358 --rc geninfo_unexecuted_blocks=1 00:09:03.358 00:09:03.358 ' 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:03.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.358 --rc genhtml_branch_coverage=1 00:09:03.358 --rc genhtml_function_coverage=1 00:09:03.358 --rc genhtml_legend=1 00:09:03.358 --rc geninfo_all_blocks=1 00:09:03.358 --rc geninfo_unexecuted_blocks=1 00:09:03.358 00:09:03.358 ' 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:03.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.358 --rc genhtml_branch_coverage=1 00:09:03.358 --rc genhtml_function_coverage=1 00:09:03.358 --rc genhtml_legend=1 00:09:03.358 --rc geninfo_all_blocks=1 00:09:03.358 --rc geninfo_unexecuted_blocks=1 00:09:03.358 00:09:03.358 ' 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:03.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.358 --rc genhtml_branch_coverage=1 00:09:03.358 --rc genhtml_function_coverage=1 00:09:03.358 --rc genhtml_legend=1 00:09:03.358 --rc geninfo_all_blocks=1 00:09:03.358 --rc geninfo_unexecuted_blocks=1 00:09:03.358 00:09:03.358 ' 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.358 20:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:03.359 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:03.359 ************************************ 00:09:03.359 START TEST nvmf_example 00:09:03.359 ************************************ 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:03.359 * Looking for test storage... 
00:09:03.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:03.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.359 --rc genhtml_branch_coverage=1 00:09:03.359 --rc genhtml_function_coverage=1 00:09:03.359 --rc genhtml_legend=1 00:09:03.359 --rc geninfo_all_blocks=1 00:09:03.359 --rc geninfo_unexecuted_blocks=1 00:09:03.359 00:09:03.359 ' 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:03.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.359 --rc genhtml_branch_coverage=1 00:09:03.359 --rc genhtml_function_coverage=1 00:09:03.359 --rc genhtml_legend=1 00:09:03.359 --rc geninfo_all_blocks=1 00:09:03.359 --rc geninfo_unexecuted_blocks=1 00:09:03.359 00:09:03.359 ' 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:03.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.359 --rc genhtml_branch_coverage=1 00:09:03.359 --rc genhtml_function_coverage=1 00:09:03.359 --rc genhtml_legend=1 00:09:03.359 --rc geninfo_all_blocks=1 00:09:03.359 --rc geninfo_unexecuted_blocks=1 00:09:03.359 00:09:03.359 ' 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:03.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.359 --rc genhtml_branch_coverage=1 00:09:03.359 --rc genhtml_function_coverage=1 00:09:03.359 --rc genhtml_legend=1 00:09:03.359 --rc geninfo_all_blocks=1 00:09:03.359 --rc geninfo_unexecuted_blocks=1 00:09:03.359 00:09:03.359 ' 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:03.359 20:39:06 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.359 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:03.360 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:03.360 20:39:06 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:03.360 20:39:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:05.261 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:05.520 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:05.520 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:05.520 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:05.520 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:05.520 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:05.520 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:05.520 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:05.520 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:05.520 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:05.520 20:39:08 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:05.520 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:05.520 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:05.520 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:09:05.520 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:05.520 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:05.520 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:05.520 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:05.520 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:05.520 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:05.520 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:05.520 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:05.520 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:05.520 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:05.520 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:05.520 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:05.520 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:05.520 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:05.520 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:05.520 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:05.520 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:05.520 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:05.520 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:05.520 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:05.520 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:05.520 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:05.520 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:05.520 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:05.520 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:05.521 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:05.521 Found net devices under 0000:09:00.0: cvl_0_0 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:05.521 Found net devices under 0000:09:00.1: cvl_0_1 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.521 20:39:08 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:05.521 20:39:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:05.521 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:05.521 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:05.521 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:05.521 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:05.521 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:05.521 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:05.521 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:05.521 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:05.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:05.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:09:05.521 00:09:05.521 --- 10.0.0.2 ping statistics --- 00:09:05.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.521 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:09:05.521 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:05.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:05.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:09:05.521 00:09:05.521 --- 10.0.0.1 ping statistics --- 00:09:05.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.521 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:09:05.521 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:05.521 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:05.521 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:05.521 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:05.521 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:05.521 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:05.521 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:05.521 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:05.521 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:05.521 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:05.521 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:05.521 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:05.521 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:05.521 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:05.521 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:05.521 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1593639 00:09:05.521 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:05.521 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1593639 00:09:05.521 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:05.521 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1593639 ']' 00:09:05.521 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.521 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:05.521 20:39:09 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.521 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:05.521 20:39:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:06.895 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:06.895 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:06.895 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:06.895 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:06.895 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:06.895 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:06.895 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.895 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:06.895 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.895 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:06.895 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.895 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:06.895 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.895 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:06.895 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:06.895 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.895 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:06.895 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.895 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:06.895 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:06.895 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.895 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:06.895 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.895 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:06.895 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:06.895 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:06.895 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.895 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:06.895 20:39:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:19.111 Initializing NVMe Controllers 00:09:19.111 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:19.111 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:19.111 Initialization complete. Launching workers. 00:09:19.111 ======================================================== 00:09:19.111 Latency(us) 00:09:19.111 Device Information : IOPS MiB/s Average min max 00:09:19.111 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14972.00 58.48 4276.13 853.60 16303.37 00:09:19.111 ======================================================== 00:09:19.111 Total : 14972.00 58.48 4276.13 853.60 16303.37 00:09:19.111 00:09:19.111 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:19.111 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:19.111 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:19.111 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:09:19.111 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:19.111 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:09:19.111 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:19.111 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:19.111 rmmod nvme_tcp 00:09:19.111 rmmod nvme_fabrics 00:09:19.111 rmmod nvme_keyring 00:09:19.111 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:19.111 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:09:19.111 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:09:19.111 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 1593639 ']' 00:09:19.111 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 1593639 00:09:19.111 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1593639 ']' 00:09:19.111 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1593639 00:09:19.111 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:09:19.111 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:19.111 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1593639 00:09:19.111 20:39:20 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:09:19.111 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:09:19.111 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1593639' 00:09:19.111 killing process with pid 1593639 00:09:19.111 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1593639 00:09:19.111 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1593639 00:09:19.111 nvmf threads initialize successfully 00:09:19.111 bdev subsystem init successfully 00:09:19.111 created a nvmf target service 00:09:19.111 create targets's poll groups done 00:09:19.111 all subsystems of target started 00:09:19.111 nvmf target is running 00:09:19.111 all subsystems of target stopped 00:09:19.111 destroy targets's poll groups done 00:09:19.111 destroyed the nvmf target service 00:09:19.111 bdev subsystem finish successfully 00:09:19.111 nvmf threads destroy successfully 00:09:19.111 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:19.111 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:19.111 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:19.111 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:09:19.111 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:09:19.111 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:19.111 20:39:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:09:19.111 20:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:19.111 20:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:19.111 20:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.111 20:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.111 20:39:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.369 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:19.369 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:19.369 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:19.369 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:19.628 00:09:19.628 real 0m16.392s 00:09:19.628 user 0m46.332s 00:09:19.628 sys 0m3.425s 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:19.628 ************************************ 00:09:19.628 END TEST nvmf_example 00:09:19.628 ************************************ 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:19.628 ************************************ 00:09:19.628 START TEST nvmf_filesystem 00:09:19.628 ************************************ 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:19.628 * Looking for test storage... 00:09:19.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:19.628 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:19.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.629 --rc genhtml_branch_coverage=1 00:09:19.629 --rc genhtml_function_coverage=1 00:09:19.629 --rc genhtml_legend=1 00:09:19.629 --rc geninfo_all_blocks=1 00:09:19.629 --rc geninfo_unexecuted_blocks=1 00:09:19.629 00:09:19.629 ' 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:19.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.629 --rc genhtml_branch_coverage=1 00:09:19.629 --rc genhtml_function_coverage=1 00:09:19.629 --rc genhtml_legend=1 00:09:19.629 --rc geninfo_all_blocks=1 00:09:19.629 --rc geninfo_unexecuted_blocks=1 00:09:19.629 00:09:19.629 ' 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:19.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.629 --rc genhtml_branch_coverage=1 00:09:19.629 --rc genhtml_function_coverage=1 00:09:19.629 --rc genhtml_legend=1 00:09:19.629 --rc geninfo_all_blocks=1 00:09:19.629 --rc geninfo_unexecuted_blocks=1 00:09:19.629 00:09:19.629 ' 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:19.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.629 --rc genhtml_branch_coverage=1 00:09:19.629 --rc genhtml_function_coverage=1 00:09:19.629 --rc genhtml_legend=1 00:09:19.629 --rc geninfo_all_blocks=1 00:09:19.629 --rc geninfo_unexecuted_blocks=1 00:09:19.629 00:09:19.629 ' 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:19.629 20:39:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:19.629 
20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:09:19.629 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:19.630 #define SPDK_CONFIG_H 00:09:19.630 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:19.630 #define SPDK_CONFIG_APPS 1 00:09:19.630 #define SPDK_CONFIG_ARCH native 00:09:19.630 #undef SPDK_CONFIG_ASAN 00:09:19.630 #undef SPDK_CONFIG_AVAHI 00:09:19.630 #undef SPDK_CONFIG_CET 00:09:19.630 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:19.630 #define SPDK_CONFIG_COVERAGE 1 00:09:19.630 #define SPDK_CONFIG_CROSS_PREFIX 00:09:19.630 #undef SPDK_CONFIG_CRYPTO 00:09:19.630 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:19.630 #undef SPDK_CONFIG_CUSTOMOCF 00:09:19.630 #undef SPDK_CONFIG_DAOS 00:09:19.630 #define SPDK_CONFIG_DAOS_DIR 00:09:19.630 #define SPDK_CONFIG_DEBUG 1 00:09:19.630 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:19.630 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:19.630 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:19.630 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:19.630 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:19.630 #undef SPDK_CONFIG_DPDK_UADK 00:09:19.630 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:19.630 #define SPDK_CONFIG_EXAMPLES 1 00:09:19.630 #undef SPDK_CONFIG_FC 00:09:19.630 #define SPDK_CONFIG_FC_PATH 00:09:19.630 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:19.630 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:19.630 #define SPDK_CONFIG_FSDEV 1 00:09:19.630 #undef SPDK_CONFIG_FUSE 00:09:19.630 #undef SPDK_CONFIG_FUZZER 00:09:19.630 #define SPDK_CONFIG_FUZZER_LIB 00:09:19.630 #undef SPDK_CONFIG_GOLANG 00:09:19.630 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:19.630 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:19.630 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:19.630 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:19.630 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:19.630 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:19.630 #undef SPDK_CONFIG_HAVE_LZ4 00:09:19.630 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:19.630 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:19.630 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:19.630 #define SPDK_CONFIG_IDXD 1 00:09:19.630 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:19.630 #undef SPDK_CONFIG_IPSEC_MB 00:09:19.630 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:19.630 #define SPDK_CONFIG_ISAL 1 00:09:19.630 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:19.630 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:19.630 #define SPDK_CONFIG_LIBDIR 00:09:19.630 #undef SPDK_CONFIG_LTO 00:09:19.630 #define SPDK_CONFIG_MAX_LCORES 128 00:09:19.630 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:09:19.630 #define SPDK_CONFIG_NVME_CUSE 1 00:09:19.630 #undef SPDK_CONFIG_OCF 00:09:19.630 #define SPDK_CONFIG_OCF_PATH 00:09:19.630 #define SPDK_CONFIG_OPENSSL_PATH 00:09:19.630 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:19.630 #define SPDK_CONFIG_PGO_DIR 00:09:19.630 #undef SPDK_CONFIG_PGO_USE 00:09:19.630 #define SPDK_CONFIG_PREFIX /usr/local 00:09:19.630 #undef SPDK_CONFIG_RAID5F 00:09:19.630 #undef SPDK_CONFIG_RBD 00:09:19.630 #define SPDK_CONFIG_RDMA 1 00:09:19.630 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:19.630 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:19.630 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:19.630 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:19.630 #define SPDK_CONFIG_SHARED 1 00:09:19.630 #undef SPDK_CONFIG_SMA 00:09:19.630 #define SPDK_CONFIG_TESTS 1 00:09:19.630 #undef SPDK_CONFIG_TSAN 
00:09:19.630 #define SPDK_CONFIG_UBLK 1 00:09:19.630 #define SPDK_CONFIG_UBSAN 1 00:09:19.630 #undef SPDK_CONFIG_UNIT_TESTS 00:09:19.630 #undef SPDK_CONFIG_URING 00:09:19.630 #define SPDK_CONFIG_URING_PATH 00:09:19.630 #undef SPDK_CONFIG_URING_ZNS 00:09:19.630 #undef SPDK_CONFIG_USDT 00:09:19.630 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:19.630 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:19.630 #define SPDK_CONFIG_VFIO_USER 1 00:09:19.630 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:19.630 #define SPDK_CONFIG_VHOST 1 00:09:19.630 #define SPDK_CONFIG_VIRTIO 1 00:09:19.630 #undef SPDK_CONFIG_VTUNE 00:09:19.630 #define SPDK_CONFIG_VTUNE_DIR 00:09:19.630 #define SPDK_CONFIG_WERROR 1 00:09:19.630 #define SPDK_CONFIG_WPDK_DIR 00:09:19.630 #undef SPDK_CONFIG_XNVME 00:09:19.630 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:19.630 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.631 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:19.631 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:19.631 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:19.631 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:19.631 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:19.631 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:19.631 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:19.631 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:19.631 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:19.893 20:39:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:19.893 20:39:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:19.893 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
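The exports traced above set up the sanitizer runtime for this run: a leak-sanitizer suppression file is written so the known libfuse3.so leak does not abort the job, and ASAN/UBSAN are configured to abort on real findings. A minimal sketch of that same setup, using only the paths and option strings visible in the trace:

  # write the LSAN suppression file and point the sanitizers at it
  echo 'leak:libfuse3.so' > /var/tmp/asan_suppression_file
  export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
  # abort on sanitizer errors; core dumps stay enabled (disable_coredump=0)
  export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
  export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134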
00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1595453 ]] 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1595453 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 
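set_test_storage, called above with 2147483648 bytes (2 GiB), is what the following trace walks through: it builds a list of candidate directories (the test dir, a mktemp fallback under /tmp, and the fallback root), checks each one's backing filesystem with df, and exports the first candidate with enough free space as SPDK_TEST_STORAGE. A condensed sketch of that selection, assuming GNU df; the helper below is illustrative, not the script's exact implementation:

  requested_size=2147483648                       # 2 GiB, as passed above
  testdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
  storage_fallback=$(mktemp -udt spdk.XXXXXX)
  candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
  for dir in "${candidates[@]}"; do
      mkdir -p "$dir"
      # free bytes on the filesystem backing $dir
      avail=$(df --block-size=1 --output=avail "$dir" | tail -1)
      if (( avail >= requested_size )); then
          export SPDK_TEST_STORAGE=$dir
          break
      fi
  done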
00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.mz03YQ 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.mz03YQ/tests/target /tmp/spdk.mz03YQ 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:09:19.894 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:09:19.895 20:39:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=50829688832 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61988519936 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11158831104 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30982893568 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994259968 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12375265280 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12397707264 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22441984 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=29919539200 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994259968 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=1074720768 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:19.895 20:39:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6198837248 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6198849536 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:09:19.895 * Looking for test storage... 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=50829688832 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=13373423616 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:19.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:09:19.895 20:39:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:19.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.895 --rc genhtml_branch_coverage=1 00:09:19.895 --rc genhtml_function_coverage=1 00:09:19.895 --rc genhtml_legend=1 00:09:19.895 --rc geninfo_all_blocks=1 00:09:19.895 --rc geninfo_unexecuted_blocks=1 00:09:19.895 00:09:19.895 ' 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:19.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.895 --rc genhtml_branch_coverage=1 00:09:19.895 --rc genhtml_function_coverage=1 00:09:19.895 --rc genhtml_legend=1 00:09:19.895 --rc geninfo_all_blocks=1 00:09:19.895 --rc geninfo_unexecuted_blocks=1 00:09:19.895 00:09:19.895 ' 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:19.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.895 --rc genhtml_branch_coverage=1 00:09:19.895 --rc genhtml_function_coverage=1 00:09:19.895 --rc genhtml_legend=1 00:09:19.895 --rc geninfo_all_blocks=1 00:09:19.895 --rc geninfo_unexecuted_blocks=1 00:09:19.895 00:09:19.895 ' 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:19.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.895 --rc genhtml_branch_coverage=1 00:09:19.895 --rc genhtml_function_coverage=1 00:09:19.895 --rc genhtml_legend=1 00:09:19.895 --rc geninfo_all_blocks=1 00:09:19.895 --rc geninfo_unexecuted_blocks=1 00:09:19.895 00:09:19.895 ' 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.895 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.896 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.896 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.896 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.896 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:19.896 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.896 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:09:19.896 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:19.896 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:19.896 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:19.896 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:09:19.896 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:19.896 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:19.896 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:19.896 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:19.896 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:19.896 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:19.896 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:19.896 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:19.896 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:19.896 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:19.896 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:19.896 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:19.896 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:19.896 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:19.896 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.896 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.896 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.896 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:19.896 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:19.896 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:19.896 20:39:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:09:22.426 
20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:22.426 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:22.426 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:22.426 Found net devices under 0000:09:00.0: cvl_0_0 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:22.426 Found net devices under 
0000:09:00.1: cvl_0_1 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:22.426 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:22.427 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:22.427 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.411 ms 00:09:22.427 00:09:22.427 --- 10.0.0.2 ping statistics --- 00:09:22.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.427 rtt min/avg/max/mdev = 0.411/0.411/0.411/0.000 ms 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:22.427 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:22.427 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:09:22.427 00:09:22.427 --- 10.0.0.1 ping statistics --- 00:09:22.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:22.427 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:22.427 ************************************ 00:09:22.427 START TEST nvmf_filesystem_no_in_capsule 00:09:22.427 ************************************ 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1597103 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1597103 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1597103 ']' 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:22.427 20:39:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:22.427 [2024-11-26 20:39:25.895369] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:09:22.427 [2024-11-26 20:39:25.895475] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.427 [2024-11-26 20:39:25.966388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:22.427 [2024-11-26 20:39:26.026795] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:22.427 [2024-11-26 20:39:26.026845] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:22.427 [2024-11-26 20:39:26.026867] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:22.427 [2024-11-26 20:39:26.026877] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:22.427 [2024-11-26 20:39:26.026886] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:22.427 [2024-11-26 20:39:26.028346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.427 [2024-11-26 20:39:26.028410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:22.427 [2024-11-26 20:39:26.028478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.427 [2024-11-26 20:39:26.028475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:22.685 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:22.685 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:22.685 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:22.685 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:22.685 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:22.685 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:22.685 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:22.685 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:22.685 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.685 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:22.685 [2024-11-26 20:39:26.179131] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:22.685 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.685 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:22.685 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.685 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:22.685 Malloc1 00:09:22.685 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.685 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:22.686 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.686 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:22.686 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.686 20:39:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:22.686 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.686 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:22.686 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.686 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:22.686 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.686 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:22.686 [2024-11-26 20:39:26.368905] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:22.686 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.686 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:22.686 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:22.686 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:22.686 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:22.686 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:22.686 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:22.686 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.686 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:22.943 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.943 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:09:22.943 { 00:09:22.943 "name": "Malloc1", 00:09:22.943 "aliases": [ 00:09:22.943 "c81561a8-9ad7-4a8b-931d-0bfe7738b38e" 00:09:22.943 ], 00:09:22.943 "product_name": "Malloc disk", 00:09:22.943 "block_size": 512, 00:09:22.943 "num_blocks": 1048576, 00:09:22.943 "uuid": "c81561a8-9ad7-4a8b-931d-0bfe7738b38e", 00:09:22.943 "assigned_rate_limits": { 00:09:22.943 "rw_ios_per_sec": 0, 00:09:22.943 "rw_mbytes_per_sec": 0, 00:09:22.943 "r_mbytes_per_sec": 0, 00:09:22.943 "w_mbytes_per_sec": 0 00:09:22.943 }, 00:09:22.943 "claimed": true, 00:09:22.943 "claim_type": "exclusive_write", 00:09:22.943 "zoned": false, 00:09:22.943 "supported_io_types": { 00:09:22.943 "read": 
true, 00:09:22.943 "write": true, 00:09:22.943 "unmap": true, 00:09:22.943 "flush": true, 00:09:22.943 "reset": true, 00:09:22.943 "nvme_admin": false, 00:09:22.943 "nvme_io": false, 00:09:22.943 "nvme_io_md": false, 00:09:22.943 "write_zeroes": true, 00:09:22.943 "zcopy": true, 00:09:22.943 "get_zone_info": false, 00:09:22.943 "zone_management": false, 00:09:22.943 "zone_append": false, 00:09:22.943 "compare": false, 00:09:22.943 "compare_and_write": false, 00:09:22.943 "abort": true, 00:09:22.943 "seek_hole": false, 00:09:22.943 "seek_data": false, 00:09:22.943 "copy": true, 00:09:22.943 "nvme_iov_md": false 00:09:22.943 }, 00:09:22.943 "memory_domains": [ 00:09:22.943 { 00:09:22.943 "dma_device_id": "system", 00:09:22.943 "dma_device_type": 1 00:09:22.943 }, 00:09:22.943 { 00:09:22.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.943 "dma_device_type": 2 00:09:22.943 } 00:09:22.943 ], 00:09:22.943 "driver_specific": {} 00:09:22.943 } 00:09:22.943 ]' 00:09:22.943 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:22.943 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:22.943 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:22.943 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:22.943 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:22.943 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:22.943 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:22.943 20:39:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:23.507 20:39:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:23.507 20:39:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:23.507 20:39:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:23.507 20:39:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:23.507 20:39:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:25.403 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:25.403 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:25.403 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:09:25.403 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:25.403 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:25.403 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:09:25.403 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:25.403 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:25.403 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:25.403 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:25.403 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:25.403 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:25.403 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:25.403 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:25.403 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:25.403 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:25.403 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:25.669 20:39:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:26.617 20:39:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:27.549 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:27.549 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:27.549 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:27.549 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.549 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:27.549 ************************************ 00:09:27.549 START TEST filesystem_ext4 00:09:27.549 ************************************ 00:09:27.549 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
00:09:27.549 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:27.549 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:27.549 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:27.549 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:09:27.549 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:27.549 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:09:27.549 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:09:27.549 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:09:27.549 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:09:27.549 20:39:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:27.549 mke2fs 1.47.0 (5-Feb-2023) 00:09:27.806 Discarding device blocks: 0/522240 done 00:09:27.806 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:27.806 Filesystem UUID: eb19c0db-1a79-49ee-832f-daf4b3f26e2b 00:09:27.806 Superblock backups stored on blocks: 00:09:27.806 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:27.806 00:09:27.806 Allocating group tables: 0/64 done 00:09:27.806 Writing inode tables: 0/64 done 00:09:29.176 Creating journal (8192 blocks): done 00:09:29.690 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:09:29.690 00:09:29.691 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:09:29.691 20:39:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:36.241 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:36.241 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:09:36.241 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:36.241 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:36.241 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:36.241 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:36.241 
20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1597103 00:09:36.241 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:36.241 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:36.241 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:36.241 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:36.241 00:09:36.241 real 0m7.616s 00:09:36.241 user 0m0.024s 00:09:36.241 sys 0m0.063s 00:09:36.241 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.241 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:36.241 ************************************ 00:09:36.241 END TEST filesystem_ext4 00:09:36.241 ************************************ 00:09:36.241 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:36.241 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:36.241 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.241 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:36.241 ************************************ 00:09:36.241 START TEST filesystem_btrfs 00:09:36.241 ************************************ 00:09:36.241 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:36.241 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:36.241 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:36.241 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:36.241 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:09:36.241 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:36.241 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:09:36.241 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:09:36.241 20:39:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:09:36.241 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:09:36.241 20:39:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:36.241 btrfs-progs v6.8.1 00:09:36.241 See https://btrfs.readthedocs.io for more information. 00:09:36.241 00:09:36.241 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:09:36.241 NOTE: several default settings have changed in version 5.15, please make sure 00:09:36.241 this does not affect your deployments: 00:09:36.241 - DUP for metadata (-m dup) 00:09:36.241 - enabled no-holes (-O no-holes) 00:09:36.241 - enabled free-space-tree (-R free-space-tree) 00:09:36.241 00:09:36.241 Label: (null) 00:09:36.241 UUID: 67339ca9-afdd-4135-8187-ff85fb8dc002 00:09:36.241 Node size: 16384 00:09:36.241 Sector size: 4096 (CPU page size: 4096) 00:09:36.241 Filesystem size: 510.00MiB 00:09:36.241 Block group profiles: 00:09:36.241 Data: single 8.00MiB 00:09:36.241 Metadata: DUP 32.00MiB 00:09:36.241 System: DUP 8.00MiB 00:09:36.241 SSD detected: yes 00:09:36.241 Zoned device: no 00:09:36.241 Features: extref, skinny-metadata, no-holes, free-space-tree 00:09:36.241 Checksum: crc32c 00:09:36.241 Number of devices: 1 00:09:36.241 Devices: 00:09:36.241 ID SIZE PATH 00:09:36.241 1 510.00MiB /dev/nvme0n1p1 00:09:36.241 00:09:36.241 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:09:36.241 20:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:36.499 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:36.499 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:09:36.499 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:36.499 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:09:36.499 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:36.499 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:36.757 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1597103 00:09:36.757 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:36.757 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:36.757 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:36.757 
20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:36.757 00:09:36.757 real 0m1.410s 00:09:36.757 user 0m0.016s 00:09:36.757 sys 0m0.115s 00:09:36.757 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.757 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:36.757 ************************************ 00:09:36.757 END TEST filesystem_btrfs 00:09:36.757 ************************************ 00:09:36.757 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:36.757 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:36.757 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.757 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:36.757 ************************************ 00:09:36.757 START TEST filesystem_xfs 00:09:36.757 ************************************ 00:09:36.757 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:09:36.757 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:36.757 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:36.757 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:36.757 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:09:36.757 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:36.757 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:09:36.757 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:09:36.757 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:09:36.757 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:09:36.757 20:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:36.757 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:36.757 = sectsz=512 attr=2, projid32bit=1 00:09:36.757 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:36.757 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:36.757 data 
= bsize=4096 blocks=130560, imaxpct=25 00:09:36.757 = sunit=0 swidth=0 blks 00:09:36.757 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:36.757 log =internal log bsize=4096 blocks=16384, version=2 00:09:36.757 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:36.757 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:37.752 Discarding blocks...Done. 00:09:38.038 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:09:38.038 20:39:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:40.562 20:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:40.562 20:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:09:40.562 20:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:40.562 20:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:09:40.562 20:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:09:40.562 20:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:40.562 20:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1597103 00:09:40.562 20:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:40.562 20:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:40.562 20:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:40.562 20:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:40.562 00:09:40.562 real 0m3.597s 00:09:40.562 user 0m0.021s 00:09:40.562 sys 0m0.063s 00:09:40.562 20:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.562 20:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:40.562 ************************************ 00:09:40.562 END TEST filesystem_xfs 00:09:40.562 ************************************ 00:09:40.562 20:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:40.562 20:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:40.562 20:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:40.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.562 20:39:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:40.562 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:09:40.562 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:40.562 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.562 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:40.562 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.562 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:09:40.562 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:40.562 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.562 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:40.562 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.562 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:40.562 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1597103 00:09:40.562 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1597103 ']' 00:09:40.562 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1597103 00:09:40.562 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:09:40.562 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.562 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1597103 00:09:40.562 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:40.562 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:40.562 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1597103' 00:09:40.562 killing process with pid 1597103 00:09:40.562 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1597103 00:09:40.562 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 1597103 00:09:41.128 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:41.128 00:09:41.128 real 0m18.729s 00:09:41.128 user 1m12.583s 00:09:41.128 sys 0m2.263s 00:09:41.128 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.128 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:41.128 ************************************ 00:09:41.128 END TEST nvmf_filesystem_no_in_capsule 00:09:41.128 ************************************ 00:09:41.128 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:41.128 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:41.128 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.128 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:41.128 ************************************ 00:09:41.128 START TEST nvmf_filesystem_in_capsule 00:09:41.128 ************************************ 00:09:41.128 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:09:41.128 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:41.128 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:41.128 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:41.128 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:41.128 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:41.128 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=1599609 00:09:41.128 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:41.128 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 1599609 00:09:41.128 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1599609 ']' 00:09:41.128 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.128 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:41.128 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:41.128 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:41.128 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:41.128 [2024-11-26 20:39:44.675996] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:09:41.128 [2024-11-26 20:39:44.676078] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:41.128 [2024-11-26 20:39:44.749246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:41.128 [2024-11-26 20:39:44.807294] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:41.128 [2024-11-26 20:39:44.807344] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:41.128 [2024-11-26 20:39:44.807371] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:41.128 [2024-11-26 20:39:44.807383] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:41.128 [2024-11-26 20:39:44.807392] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:41.128 [2024-11-26 20:39:44.808842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:41.128 [2024-11-26 20:39:44.808907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:41.128 [2024-11-26 20:39:44.808971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:41.128 [2024-11-26 20:39:44.808975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.387 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.387 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:41.387 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:41.387 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:41.387 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:41.387 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.387 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:41.387 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:09:41.387 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.387 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:41.387 [2024-11-26 20:39:44.960921] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:41.387 20:39:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.387 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:41.387 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.387 20:39:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:41.645 Malloc1 00:09:41.645 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.645 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:41.645 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.645 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:41.645 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.645 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:41.645 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.645 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:41.645 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.645 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:41.645 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.645 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:41.645 [2024-11-26 20:39:45.169100] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:41.645 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.645 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:41.645 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:41.645 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:41.645 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:41.645 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:41.645 20:39:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:41.645 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.645 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:41.645 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.645 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:09:41.645 { 00:09:41.645 "name": "Malloc1", 00:09:41.645 "aliases": [ 00:09:41.645 "d511d27f-5f4b-4025-8b63-109b3f06f558" 00:09:41.645 ], 00:09:41.645 "product_name": "Malloc disk", 00:09:41.645 "block_size": 512, 00:09:41.645 "num_blocks": 1048576, 00:09:41.645 "uuid": "d511d27f-5f4b-4025-8b63-109b3f06f558", 00:09:41.645 "assigned_rate_limits": { 00:09:41.645 "rw_ios_per_sec": 0, 00:09:41.645 "rw_mbytes_per_sec": 0, 00:09:41.645 "r_mbytes_per_sec": 0, 00:09:41.645 "w_mbytes_per_sec": 0 00:09:41.645 }, 00:09:41.645 "claimed": true, 00:09:41.645 "claim_type": "exclusive_write", 00:09:41.645 "zoned": false, 00:09:41.645 "supported_io_types": { 00:09:41.645 "read": true, 00:09:41.645 "write": true, 00:09:41.645 "unmap": true, 00:09:41.645 "flush": true, 00:09:41.645 "reset": true, 00:09:41.645 "nvme_admin": false, 00:09:41.645 "nvme_io": false, 00:09:41.645 "nvme_io_md": false, 00:09:41.645 "write_zeroes": true, 00:09:41.645 "zcopy": true, 00:09:41.645 "get_zone_info": false, 00:09:41.645 "zone_management": false, 00:09:41.645 "zone_append": false, 00:09:41.645 "compare": false, 00:09:41.645 "compare_and_write": false, 00:09:41.645 "abort": true, 00:09:41.645 "seek_hole": false, 00:09:41.645 "seek_data": false, 00:09:41.645 "copy": true, 00:09:41.645 "nvme_iov_md": false 00:09:41.645 }, 00:09:41.645 "memory_domains": [ 00:09:41.645 { 00:09:41.645 "dma_device_id": "system", 00:09:41.645 "dma_device_type": 1 00:09:41.645 }, 00:09:41.645 { 00:09:41.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.645 "dma_device_type": 2 00:09:41.645 } 00:09:41.645 ], 00:09:41.645 "driver_specific": {} 00:09:41.645 } 00:09:41.645 ]' 00:09:41.645 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:41.645 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:41.645 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:41.645 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:41.645 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:41.645 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:41.645 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:41.645 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:42.577 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:42.577 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:42.577 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:42.577 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:42.577 20:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:44.472 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:44.472 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:44.472 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:44.472 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:44.472 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:44.472 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:09:44.472 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:44.472 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:44.472 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:44.472 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:44.472 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:44.472 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:44.472 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:44.472 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:44.472 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:44.472 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:44.472 20:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:44.472 20:39:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:45.035 20:39:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:46.405 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:46.405 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:46.405 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:46.405 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.405 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:46.405 ************************************ 00:09:46.405 START TEST filesystem_in_capsule_ext4 00:09:46.405 ************************************ 00:09:46.405 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:46.405 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:46.405 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:46.405 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:46.405 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:09:46.405 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:46.405 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:09:46.405 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:09:46.405 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:09:46.405 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:09:46.405 20:39:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:46.405 mke2fs 1.47.0 (5-Feb-2023) 00:09:46.405 Discarding device blocks: 0/522240 done 00:09:46.405 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:46.405 Filesystem UUID: 399da38a-409f-4d56-b0e6-8a2d71bc1350 00:09:46.405 Superblock backups stored on blocks: 00:09:46.405 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:46.405 00:09:46.405 Allocating group tables: 0/64 done 00:09:46.405 Writing inode tables: 
0/64 done 00:09:46.970 Creating journal (8192 blocks): done 00:09:46.970 Writing superblocks and filesystem accounting information: 0/64 done 00:09:46.970 00:09:46.970 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:09:46.970 20:39:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:52.224 20:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:52.224 20:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:09:52.224 20:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:52.224 20:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:09:52.224 20:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:52.224 20:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:52.482 20:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1599609 00:09:52.482 20:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:52.482 20:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:52.482 20:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:52.482 20:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:52.482 00:09:52.482 real 0m6.265s 00:09:52.482 user 0m0.024s 00:09:52.482 sys 0m0.060s 00:09:52.482 20:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.482 20:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:52.482 ************************************ 00:09:52.482 END TEST filesystem_in_capsule_ext4 00:09:52.482 ************************************ 00:09:52.482 20:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:52.482 20:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:52.482 20:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.482 20:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:52.482 
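[Note] The ext4 run above, and the btrfs and xfs runs that follow, all exercise the same verify cycle visible in the target/filesystem.sh trace: mount the freshly formatted partition, create and remove a file with syncs in between, unmount, then confirm the namespace and partition are still listed. A condensed sketch of that cycle, with the device and mount point from the log:

  dev=/dev/nvme0n1p1
  mnt=/mnt/device

  mkdir -p "$mnt"
  mount "$dev" "$mnt"            # target/filesystem.sh@23
  touch "$mnt/aaa" && sync       # basic write-path check
  rm "$mnt/aaa" && sync
  umount "$mnt"

  # After unmounting, the namespace and its partition must still be visible
  lsblk -l -o NAME | grep -qw nvme0n1
  lsblk -l -o NAME | grep -qw nvme0n1p1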
************************************ 00:09:52.482 START TEST filesystem_in_capsule_btrfs 00:09:52.482 ************************************ 00:09:52.482 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:52.482 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:52.482 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:52.482 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:52.482 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:09:52.482 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:52.482 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:09:52.482 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:09:52.482 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:09:52.482 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:09:52.482 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:52.739 btrfs-progs v6.8.1 00:09:52.739 See https://btrfs.readthedocs.io for more information. 00:09:52.739 00:09:52.739 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:52.739 NOTE: several default settings have changed in version 5.15, please make sure 00:09:52.739 this does not affect your deployments: 00:09:52.739 - DUP for metadata (-m dup) 00:09:52.739 - enabled no-holes (-O no-holes) 00:09:52.739 - enabled free-space-tree (-R free-space-tree) 00:09:52.739 00:09:52.739 Label: (null) 00:09:52.739 UUID: 3a4c0ac9-739a-4fcd-b54e-d08af8c55ceb 00:09:52.739 Node size: 16384 00:09:52.739 Sector size: 4096 (CPU page size: 4096) 00:09:52.739 Filesystem size: 510.00MiB 00:09:52.739 Block group profiles: 00:09:52.739 Data: single 8.00MiB 00:09:52.739 Metadata: DUP 32.00MiB 00:09:52.739 System: DUP 8.00MiB 00:09:52.739 SSD detected: yes 00:09:52.739 Zoned device: no 00:09:52.739 Features: extref, skinny-metadata, no-holes, free-space-tree 00:09:52.739 Checksum: crc32c 00:09:52.739 Number of devices: 1 00:09:52.739 Devices: 00:09:52.739 ID SIZE PATH 00:09:52.739 1 510.00MiB /dev/nvme0n1p1 00:09:52.739 00:09:52.739 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:09:52.739 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:52.996 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:52.996 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:09:52.996 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:52.996 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:09:52.996 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:52.996 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:52.996 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1599609 00:09:52.996 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:52.996 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:52.996 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:52.996 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:52.996 00:09:52.996 real 0m0.624s 00:09:52.996 user 0m0.024s 00:09:52.996 sys 0m0.092s 00:09:52.996 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.996 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:09:52.996 ************************************ 00:09:52.996 END TEST filesystem_in_capsule_btrfs 00:09:52.996 ************************************ 00:09:52.996 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:52.996 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:52.996 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.996 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:52.996 ************************************ 00:09:52.996 START TEST filesystem_in_capsule_xfs 00:09:52.996 ************************************ 00:09:52.996 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:09:52.996 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:52.996 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:52.996 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:52.996 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:09:52.996 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:52.996 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:09:52.996 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:09:52.996 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:09:52.996 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:09:52.996 20:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:53.253 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:53.253 = sectsz=512 attr=2, projid32bit=1 00:09:53.253 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:53.253 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:53.253 data = bsize=4096 blocks=130560, imaxpct=25 00:09:53.253 = sunit=0 swidth=0 blks 00:09:53.253 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:53.253 log =internal log bsize=4096 blocks=16384, version=2 00:09:53.253 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:53.253 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:54.184 Discarding blocks...Done. 
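[Note] Each filesystem sub-test goes through the make_filesystem helper whose xtrace lines appear above (autotest_common.sh@930-941). A simplified reconstruction of what that helper appears to do, based only on the traced lines; the real helper also retries mkfs on failure, which is not reproduced here:

  make_filesystem() {
      local fstype=$1
      local dev_name=$2
      local force

      # ext4's mkfs forces with -F; btrfs and xfs use -f (autotest_common.sh@935-938)
      if [ "$fstype" = ext4 ]; then
          force=-F
      else
          force=-f
      fi

      "mkfs.$fstype" "$force" "$dev_name"
  }

  make_filesystem xfs /dev/nvme0n1p1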
00:09:54.184 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:09:54.184 20:39:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:56.080 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:56.080 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:09:56.080 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:56.080 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:09:56.080 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:09:56.080 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:56.080 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1599609 00:09:56.080 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:56.080 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:56.080 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:56.080 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:56.080 00:09:56.080 real 0m2.658s 00:09:56.080 user 0m0.017s 00:09:56.080 sys 0m0.061s 00:09:56.080 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.080 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:56.080 ************************************ 00:09:56.080 END TEST filesystem_in_capsule_xfs 00:09:56.080 ************************************ 00:09:56.080 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:56.080 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:56.080 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:56.080 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.080 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:56.080 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:09:56.080 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:56.080 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:56.080 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:56.080 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:56.080 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:09:56.080 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:56.080 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.081 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:56.081 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.081 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:56.081 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1599609 00:09:56.081 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1599609 ']' 00:09:56.081 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1599609 00:09:56.081 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:09:56.081 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:56.081 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1599609 00:09:56.081 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:56.081 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:56.081 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1599609' 00:09:56.081 killing process with pid 1599609 00:09:56.081 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1599609 00:09:56.081 20:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1599609 00:09:56.646 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:56.646 00:09:56.646 real 0m15.471s 00:09:56.646 user 0m59.848s 00:09:56.646 sys 0m1.973s 00:09:56.646 20:40:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.646 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:56.646 ************************************ 00:09:56.646 END TEST nvmf_filesystem_in_capsule 00:09:56.646 ************************************ 00:09:56.646 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:09:56.646 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:56.646 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:09:56.646 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:56.646 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:09:56.646 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:56.646 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:56.646 rmmod nvme_tcp 00:09:56.646 rmmod nvme_fabrics 00:09:56.646 rmmod nvme_keyring 00:09:56.646 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:56.646 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:09:56.646 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:09:56.646 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:56.646 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:56.646 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:56.646 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:56.646 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:09:56.646 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:09:56.646 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:56.646 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:09:56.646 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:56.646 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:56.646 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.646 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:56.646 20:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.548 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:58.548 00:09:58.548 real 0m39.117s 00:09:58.548 user 2m13.570s 00:09:58.548 sys 0m6.031s 00:09:58.548 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.548 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:58.548 
************************************ 00:09:58.548 END TEST nvmf_filesystem 00:09:58.548 ************************************ 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:58.808 ************************************ 00:09:58.808 START TEST nvmf_target_discovery 00:09:58.808 ************************************ 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:58.808 * Looking for test storage... 00:09:58.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:58.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.808 --rc genhtml_branch_coverage=1 00:09:58.808 --rc genhtml_function_coverage=1 00:09:58.808 --rc genhtml_legend=1 00:09:58.808 --rc geninfo_all_blocks=1 00:09:58.808 --rc geninfo_unexecuted_blocks=1 00:09:58.808 00:09:58.808 ' 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:58.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.808 --rc genhtml_branch_coverage=1 00:09:58.808 --rc genhtml_function_coverage=1 00:09:58.808 --rc genhtml_legend=1 00:09:58.808 --rc geninfo_all_blocks=1 00:09:58.808 --rc geninfo_unexecuted_blocks=1 00:09:58.808 00:09:58.808 ' 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:58.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.808 --rc genhtml_branch_coverage=1 00:09:58.808 --rc genhtml_function_coverage=1 00:09:58.808 --rc genhtml_legend=1 00:09:58.808 --rc geninfo_all_blocks=1 00:09:58.808 --rc geninfo_unexecuted_blocks=1 00:09:58.808 00:09:58.808 ' 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:58.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.808 --rc genhtml_branch_coverage=1 00:09:58.808 --rc genhtml_function_coverage=1 00:09:58.808 --rc genhtml_legend=1 00:09:58.808 --rc geninfo_all_blocks=1 00:09:58.808 --rc geninfo_unexecuted_blocks=1 00:09:58.808 00:09:58.808 ' 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:58.808 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:58.809 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:09:58.809 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:58.809 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.809 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.809 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.809 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.809 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.809 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:09:58.809 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.809 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:09:58.809 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:58.809 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:58.809 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:58.809 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:58.809 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:58.809 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:58.809 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:58.809 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:58.809 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:58.809 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:58.809 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:58.809 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:58.809 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:58.809 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:09:58.809 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:09:58.809 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:58.809 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:58.809 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:58.809 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:58.809 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:58.809 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.809 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:58.809 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.809 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:58.809 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:58.809 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:09:58.809 20:40:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:10:01.344 20:40:04 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:01.344 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:01.344 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:01.344 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:01.345 Found net devices under 0000:09:00.0: cvl_0_0 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
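[Note] The NIC discovery traced here works by walking sysfs: for every supported PCI ID (the two E810 functions in this run) it lists the net interfaces that hang off the PCI device and keeps the ones whose link is up. A rough equivalent of that lookup; reading operstate is one way to perform the "up" check the harness does, not necessarily the exact mechanism:

  # Map an E810 PCI function to its kernel net interface via sysfs,
  # mirroring gather_supported_nvmf_pci_devs in nvmf/common.sh
  for pci in 0000:09:00.0 0000:09:00.1; do
      for netdir in /sys/bus/pci/devices/$pci/net/*; do
          [ -e "$netdir" ] || continue
          dev=${netdir##*/}
          state=$(cat "/sys/class/net/$dev/operstate" 2>/dev/null)
          [ "$state" = up ] || continue
          echo "Found net devices under $pci: $dev"   # e.g. cvl_0_0 / cvl_0_1
      done
  done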
00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:01.345 Found net devices under 0000:09:00.1: cvl_0_1 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:01.345 20:40:04 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:01.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:01.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:10:01.345 00:10:01.345 --- 10.0.0.2 ping statistics --- 00:10:01.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.345 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:01.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:01.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:10:01.345 00:10:01.345 --- 10.0.0.1 ping statistics --- 00:10:01.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.345 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=1603512 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 1603512 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1603512 ']' 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:01.345 20:40:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.345 [2024-11-26 20:40:04.911144] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:10:01.345 [2024-11-26 20:40:04.911234] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.345 [2024-11-26 20:40:04.990890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:01.604 [2024-11-26 20:40:05.054953] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:01.604 [2024-11-26 20:40:05.055003] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:01.604 [2024-11-26 20:40:05.055031] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:01.604 [2024-11-26 20:40:05.055043] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:01.604 [2024-11-26 20:40:05.055053] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
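For readability, the namespace-based TCP loopback topology that nvmf/common.sh builds in the trace above can be condensed to the sketch below. Interface names, addresses, port and the nvmf_tgt invocation are restated from the log itself; this is an illustrative summary, not an additional step the job runs.

    # target-side port cvl_0_0 is moved into its own network namespace,
    # initiator-side port cvl_0_1 stays in the default namespace
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port on the initiator side and check reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # the SPDK target then runs inside the namespace on 4 cores
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
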
00:10:01.604 [2024-11-26 20:40:05.056759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.604 [2024-11-26 20:40:05.056822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:01.604 [2024-11-26 20:40:05.056887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:01.604 [2024-11-26 20:40:05.056890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.604 [2024-11-26 20:40:05.210374] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.604 Null1 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.604 20:40:05 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.604 [2024-11-26 20:40:05.271513] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.604 Null2 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.604 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.862 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.862 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:01.862 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.862 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.862 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.862 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:01.862 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:01.862 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.862 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:10:01.862 Null3 00:10:01.862 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.862 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:01.862 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.862 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.862 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.862 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:01.862 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.863 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.863 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.863 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:01.863 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.863 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.863 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.863 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:01.863 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:01.863 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.863 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.863 Null4 00:10:01.863 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.863 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:01.863 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.863 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.863 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.863 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:01.863 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.863 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.863 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.863 20:40:05 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:01.863 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.863 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.863 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.863 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:01.863 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.863 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.863 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.863 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:01.863 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.863 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:01.863 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.863 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:10:02.157 00:10:02.157 Discovery Log Number of Records 6, Generation counter 6 00:10:02.157 =====Discovery Log Entry 0====== 00:10:02.157 trtype: tcp 00:10:02.157 adrfam: ipv4 00:10:02.157 subtype: current discovery subsystem 00:10:02.157 treq: not required 00:10:02.157 portid: 0 00:10:02.157 trsvcid: 4420 00:10:02.157 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:02.157 traddr: 10.0.0.2 00:10:02.157 eflags: explicit discovery connections, duplicate discovery information 00:10:02.157 sectype: none 00:10:02.157 =====Discovery Log Entry 1====== 00:10:02.157 trtype: tcp 00:10:02.157 adrfam: ipv4 00:10:02.157 subtype: nvme subsystem 00:10:02.157 treq: not required 00:10:02.157 portid: 0 00:10:02.157 trsvcid: 4420 00:10:02.157 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:02.157 traddr: 10.0.0.2 00:10:02.157 eflags: none 00:10:02.157 sectype: none 00:10:02.157 =====Discovery Log Entry 2====== 00:10:02.157 trtype: tcp 00:10:02.157 adrfam: ipv4 00:10:02.157 subtype: nvme subsystem 00:10:02.157 treq: not required 00:10:02.157 portid: 0 00:10:02.157 trsvcid: 4420 00:10:02.157 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:02.157 traddr: 10.0.0.2 00:10:02.157 eflags: none 00:10:02.157 sectype: none 00:10:02.157 =====Discovery Log Entry 3====== 00:10:02.157 trtype: tcp 00:10:02.157 adrfam: ipv4 00:10:02.157 subtype: nvme subsystem 00:10:02.157 treq: not required 00:10:02.157 portid: 0 00:10:02.157 trsvcid: 4420 00:10:02.157 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:02.157 traddr: 10.0.0.2 00:10:02.157 eflags: none 00:10:02.157 sectype: none 00:10:02.157 =====Discovery Log Entry 4====== 00:10:02.157 trtype: tcp 00:10:02.157 adrfam: ipv4 00:10:02.157 subtype: nvme subsystem 
00:10:02.157 treq: not required 00:10:02.157 portid: 0 00:10:02.157 trsvcid: 4420 00:10:02.157 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:02.157 traddr: 10.0.0.2 00:10:02.157 eflags: none 00:10:02.157 sectype: none 00:10:02.157 =====Discovery Log Entry 5====== 00:10:02.157 trtype: tcp 00:10:02.157 adrfam: ipv4 00:10:02.157 subtype: discovery subsystem referral 00:10:02.157 treq: not required 00:10:02.157 portid: 0 00:10:02.157 trsvcid: 4430 00:10:02.157 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:02.157 traddr: 10.0.0.2 00:10:02.157 eflags: none 00:10:02.157 sectype: none 00:10:02.157 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:02.157 Perform nvmf subsystem discovery via RPC 00:10:02.157 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:02.157 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.157 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:02.157 [ 00:10:02.157 { 00:10:02.157 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:02.157 "subtype": "Discovery", 00:10:02.157 "listen_addresses": [ 00:10:02.157 { 00:10:02.157 "trtype": "TCP", 00:10:02.157 "adrfam": "IPv4", 00:10:02.157 "traddr": "10.0.0.2", 00:10:02.157 "trsvcid": "4420" 00:10:02.157 } 00:10:02.157 ], 00:10:02.157 "allow_any_host": true, 00:10:02.157 "hosts": [] 00:10:02.157 }, 00:10:02.157 { 00:10:02.157 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:02.157 "subtype": "NVMe", 00:10:02.157 "listen_addresses": [ 00:10:02.157 { 00:10:02.157 "trtype": "TCP", 00:10:02.157 "adrfam": "IPv4", 00:10:02.157 "traddr": "10.0.0.2", 00:10:02.157 "trsvcid": "4420" 00:10:02.157 } 00:10:02.157 ], 00:10:02.157 "allow_any_host": true, 00:10:02.157 "hosts": [], 00:10:02.157 "serial_number": "SPDK00000000000001", 00:10:02.157 "model_number": "SPDK bdev Controller", 00:10:02.157 "max_namespaces": 32, 00:10:02.157 "min_cntlid": 1, 00:10:02.157 "max_cntlid": 65519, 00:10:02.157 "namespaces": [ 00:10:02.157 { 00:10:02.157 "nsid": 1, 00:10:02.157 "bdev_name": "Null1", 00:10:02.157 "name": "Null1", 00:10:02.157 "nguid": "899B9FF220714373AD17819EC796771A", 00:10:02.157 "uuid": "899b9ff2-2071-4373-ad17-819ec796771a" 00:10:02.158 } 00:10:02.158 ] 00:10:02.158 }, 00:10:02.158 { 00:10:02.158 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:02.158 "subtype": "NVMe", 00:10:02.158 "listen_addresses": [ 00:10:02.158 { 00:10:02.158 "trtype": "TCP", 00:10:02.158 "adrfam": "IPv4", 00:10:02.158 "traddr": "10.0.0.2", 00:10:02.158 "trsvcid": "4420" 00:10:02.158 } 00:10:02.158 ], 00:10:02.158 "allow_any_host": true, 00:10:02.158 "hosts": [], 00:10:02.158 "serial_number": "SPDK00000000000002", 00:10:02.158 "model_number": "SPDK bdev Controller", 00:10:02.158 "max_namespaces": 32, 00:10:02.158 "min_cntlid": 1, 00:10:02.158 "max_cntlid": 65519, 00:10:02.158 "namespaces": [ 00:10:02.158 { 00:10:02.158 "nsid": 1, 00:10:02.158 "bdev_name": "Null2", 00:10:02.158 "name": "Null2", 00:10:02.158 "nguid": "DC74B7A6EB0D466589C6C7A226CE62ED", 00:10:02.158 "uuid": "dc74b7a6-eb0d-4665-89c6-c7a226ce62ed" 00:10:02.158 } 00:10:02.158 ] 00:10:02.158 }, 00:10:02.158 { 00:10:02.158 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:02.158 "subtype": "NVMe", 00:10:02.158 "listen_addresses": [ 00:10:02.158 { 00:10:02.158 "trtype": "TCP", 00:10:02.158 "adrfam": "IPv4", 00:10:02.158 "traddr": "10.0.0.2", 
00:10:02.158 "trsvcid": "4420" 00:10:02.158 } 00:10:02.158 ], 00:10:02.158 "allow_any_host": true, 00:10:02.158 "hosts": [], 00:10:02.158 "serial_number": "SPDK00000000000003", 00:10:02.158 "model_number": "SPDK bdev Controller", 00:10:02.158 "max_namespaces": 32, 00:10:02.158 "min_cntlid": 1, 00:10:02.158 "max_cntlid": 65519, 00:10:02.158 "namespaces": [ 00:10:02.158 { 00:10:02.158 "nsid": 1, 00:10:02.158 "bdev_name": "Null3", 00:10:02.158 "name": "Null3", 00:10:02.158 "nguid": "D2A7BE56D8AF4DF3A99872460D2079D3", 00:10:02.158 "uuid": "d2a7be56-d8af-4df3-a998-72460d2079d3" 00:10:02.158 } 00:10:02.158 ] 00:10:02.158 }, 00:10:02.158 { 00:10:02.158 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:02.158 "subtype": "NVMe", 00:10:02.158 "listen_addresses": [ 00:10:02.158 { 00:10:02.158 "trtype": "TCP", 00:10:02.158 "adrfam": "IPv4", 00:10:02.158 "traddr": "10.0.0.2", 00:10:02.158 "trsvcid": "4420" 00:10:02.158 } 00:10:02.158 ], 00:10:02.158 "allow_any_host": true, 00:10:02.158 "hosts": [], 00:10:02.158 "serial_number": "SPDK00000000000004", 00:10:02.158 "model_number": "SPDK bdev Controller", 00:10:02.158 "max_namespaces": 32, 00:10:02.158 "min_cntlid": 1, 00:10:02.158 "max_cntlid": 65519, 00:10:02.158 "namespaces": [ 00:10:02.158 { 00:10:02.158 "nsid": 1, 00:10:02.158 "bdev_name": "Null4", 00:10:02.158 "name": "Null4", 00:10:02.158 "nguid": "2856245ECFA3438EBD5AA52F96D4E7FA", 00:10:02.158 "uuid": "2856245e-cfa3-438e-bd5a-a52f96d4e7fa" 00:10:02.158 } 00:10:02.158 ] 00:10:02.158 } 00:10:02.158 ] 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.158 20:40:05 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.158 20:40:05 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:02.158 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.159 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:02.159 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:02.159 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:02.159 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:02.159 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:02.159 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:02.159 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:02.159 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:02.159 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:02.159 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:02.159 rmmod nvme_tcp 00:10:02.159 rmmod nvme_fabrics 00:10:02.159 rmmod nvme_keyring 00:10:02.159 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:02.159 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:02.159 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:02.159 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 1603512 ']' 00:10:02.159 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 1603512 00:10:02.159 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1603512 ']' 00:10:02.159 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1603512 00:10:02.159 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:10:02.159 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:02.159 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1603512 00:10:02.159 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:02.159 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:02.159 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1603512' 00:10:02.159 killing process with pid 1603512 00:10:02.159 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1603512 00:10:02.159 20:40:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1603512 00:10:02.418 20:40:06 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:02.418 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:02.418 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:02.418 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:10:02.418 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:10:02.418 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:02.418 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:10:02.418 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:02.418 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:02.418 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.418 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:02.418 20:40:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.003 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:05.003 00:10:05.003 real 0m5.765s 00:10:05.003 user 0m4.767s 00:10:05.003 sys 0m2.018s 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:05.004 ************************************ 00:10:05.004 END TEST nvmf_target_discovery 00:10:05.004 ************************************ 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:05.004 ************************************ 00:10:05.004 START TEST nvmf_referrals 00:10:05.004 ************************************ 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:05.004 * Looking for test storage... 
00:10:05.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:05.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.004 --rc genhtml_branch_coverage=1 00:10:05.004 --rc genhtml_function_coverage=1 00:10:05.004 --rc genhtml_legend=1 00:10:05.004 --rc geninfo_all_blocks=1 00:10:05.004 --rc geninfo_unexecuted_blocks=1 00:10:05.004 00:10:05.004 ' 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:05.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.004 --rc genhtml_branch_coverage=1 00:10:05.004 --rc genhtml_function_coverage=1 00:10:05.004 --rc genhtml_legend=1 00:10:05.004 --rc geninfo_all_blocks=1 00:10:05.004 --rc geninfo_unexecuted_blocks=1 00:10:05.004 00:10:05.004 ' 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:05.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.004 --rc genhtml_branch_coverage=1 00:10:05.004 --rc genhtml_function_coverage=1 00:10:05.004 --rc genhtml_legend=1 00:10:05.004 --rc geninfo_all_blocks=1 00:10:05.004 --rc geninfo_unexecuted_blocks=1 00:10:05.004 00:10:05.004 ' 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:05.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.004 --rc genhtml_branch_coverage=1 00:10:05.004 --rc genhtml_function_coverage=1 00:10:05.004 --rc genhtml_legend=1 00:10:05.004 --rc geninfo_all_blocks=1 00:10:05.004 --rc geninfo_unexecuted_blocks=1 00:10:05.004 00:10:05.004 ' 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:05.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:10:05.004 20:40:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:06.909 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:06.909 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:10:06.909 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:06.909 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:06.909 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:06.909 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:06.909 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:06.909 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:10:06.909 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:06.909 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:10:06.909 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:10:06.909 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:10:06.909 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:10:06.909 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:10:06.909 20:40:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:10:06.909 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:06.909 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:06.909 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:06.909 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:06.909 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:06.909 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:06.909 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:06.909 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:06.909 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:06.910 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:06.910 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:06.910 
20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:06.910 Found net devices under 0000:09:00.0: cvl_0_0 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:06.910 Found net devices under 0000:09:00.1: cvl_0_1 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:06.910 20:40:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:06.910 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.911 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.911 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:06.911 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:06.911 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:06.911 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:06.911 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:06.911 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:06.911 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:06.911 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.911 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:06.911 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:06.911 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:06.911 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:06.911 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:06.911 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:06.911 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:06.911 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:06.911 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:06.911 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:06.911 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:06.911 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:06.911 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:06.911 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:10:06.911 00:10:06.911 --- 10.0.0.2 ping statistics --- 00:10:06.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.911 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:10:06.911 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:06.911 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:06.911 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:10:06.911 00:10:06.911 --- 10.0.0.1 ping statistics --- 00:10:06.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.911 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:10:06.911 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.911 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:10:06.911 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:06.911 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.911 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:06.911 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:06.911 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.911 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:06.911 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:07.173 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:07.173 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:07.173 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:07.173 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:07.173 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=1605704 00:10:07.173 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:07.173 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 1605704 00:10:07.173 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1605704 ']' 00:10:07.173 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.173 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:07.173 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
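The trace above builds the back-to-back NVMe/TCP fixture: one port of the dual-port E810 NIC (cvl_0_0) is moved into a private network namespace and addressed as the target, the other port (cvl_0_1) stays in the root namespace as the initiator, TCP port 4420 is opened in the firewall, and a single ping in each direction proves the 10.0.0.0/24 link. Condensed into a standalone sketch (interface names and addresses are the ones from this run; the real helpers live in nvmf/common.sh):

    # keep target traffic in its own namespace so packets between 10.0.0.1 and
    # 10.0.0.2 actually cross the wire instead of the local routing table
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                    # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0      # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT           # NVMe/TCP data port
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1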
00:10:07.173 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:07.173 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:07.173 [2024-11-26 20:40:10.694326] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:10:07.173 [2024-11-26 20:40:10.694400] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:07.173 [2024-11-26 20:40:10.770463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:07.173 [2024-11-26 20:40:10.833443] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:07.173 [2024-11-26 20:40:10.833491] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:07.173 [2024-11-26 20:40:10.833505] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:07.173 [2024-11-26 20:40:10.833517] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:07.173 [2024-11-26 20:40:10.833528] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:07.173 [2024-11-26 20:40:10.835147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:07.173 [2024-11-26 20:40:10.835207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:07.173 [2024-11-26 20:40:10.835256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:07.173 [2024-11-26 20:40:10.835259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.431 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.431 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:10:07.431 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:07.431 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:07.431 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:07.431 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:07.431 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:07.431 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.431 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:07.431 [2024-11-26 20:40:10.990173] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:07.431 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.431 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:07.431 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.431 20:40:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
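nvmfappstart then launches the SPDK target inside that namespace and waits for its RPC socket before any test RPCs are issued; the transport and the discovery listener created just below are what the referral checks talk to. Roughly the same sequence expressed with SPDK's rpc.py (the test goes through its rpc_cmd wrapper, but the calls and arguments are the ones shown in the trace):

    # start the target inside the namespace; it serves RPCs on /var/tmp/spdk.sock
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # once the socket is up: TCP transport, then a discovery listener on the target IP
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery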
00:10:07.431 [2024-11-26 20:40:11.019517] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:07.431 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.431 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:07.431 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.431 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:07.431 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.431 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:07.431 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.431 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:07.431 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.431 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:07.431 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.431 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:07.431 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.431 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:07.431 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:07.431 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.431 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:07.431 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.431 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:07.431 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:07.431 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:07.431 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:07.431 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:07.431 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.431 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:07.431 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:07.431 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.689 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:07.689 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:07.689 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:07.689 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:07.689 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:07.689 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:07.689 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:07.689 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:07.689 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:07.689 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:07.689 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:07.689 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.689 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:07.689 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.690 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:07.690 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.690 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:07.690 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.690 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:07.690 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.690 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:07.690 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.690 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:07.690 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:07.690 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.690 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:07.690 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.690 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:07.690 20:40:11 
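This first pass covers plain discovery-service referrals: three are added, both the RPC view and the host's discovery log are expected to list 127.0.0.2-4, and removing all three brings the referral count back to zero. The target-side half of that flow, approximately as rpc.py calls:

    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    ./scripts/rpc.py nvmf_discovery_get_referrals | jq length                    # expect 3
    ./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'   # referred addresses
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done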
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:07.690 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:07.690 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:07.690 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:07.690 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:07.690 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:07.948 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:07.948 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:07.948 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:07.948 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.948 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:07.948 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.948 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:07.948 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.948 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:07.948 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.948 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:07.948 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:07.948 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:07.948 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.948 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:07.948 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:07.948 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:07.948 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.948 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:07.948 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:07.948 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:07.948 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:10:07.948 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:07.948 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:07.948 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:07.948 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:08.205 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:08.205 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:08.205 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:08.205 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:08.205 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:08.205 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:08.205 20:40:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:08.463 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:08.463 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:08.463 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:08.463 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:08.463 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:08.463 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:08.720 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:08.720 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:08.720 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.720 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:08.720 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.720 20:40:12 
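The second pass adds 127.0.0.2 twice, once as a discovery-subsystem referral and once pointing at nqn.2016-06.io.spdk:cnode1, so the discovery log should carry one record of each subtype. The host-side checks (get_referral_ips nvme and get_discovery_entries) reduce to filtering the JSON discovery log by subtype; a sketch, with the hostnqn/hostid placeholders standing in for the generated UUID shown in the trace:

    disc() {
        nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
            -t tcp -a 10.0.0.2 -s 8009 -o json
    }
    # referred addresses, ignoring the discovery subsystem we are talking to
    disc | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    # subsystem referral vs. discovery-subsystem referral records
    disc | jq -r '.records[] | select(.subtype == "nvme subsystem").subnqn'
    disc | jq -r '.records[] | select(.subtype == "discovery subsystem referral").subnqn'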
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:08.720 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:08.720 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:08.721 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.721 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:08.721 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:08.721 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:08.721 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.721 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:08.721 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:08.721 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:08.721 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:08.721 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:08.721 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:08.721 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:08.721 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:08.978 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:08.979 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:08.979 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:08.979 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:08.979 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:08.979 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:08.979 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:08.979 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:08.979 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:08.979 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:08.979 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:10:08.979 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:08.979 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:09.237 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:09.237 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:09.237 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.237 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:09.237 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.237 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:09.237 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.237 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:09.237 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:09.237 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.237 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:09.237 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:09.237 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:09.237 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:09.237 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:09.237 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:09.237 20:40:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:09.495 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:09.495 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:09.495 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:09.495 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:09.495 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:09.495 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:10:09.495 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
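With the referral checks finished, nvmftestfini (traced below through the end of the test) unwinds the fixture. Stripped of its retry loop and error handling, the cleanup amounts to:

    modprobe -v -r nvme-tcp nvme-fabrics                    # rmmod nvme_tcp/nvme_fabrics/nvme_keyring
    kill "$nvmfpid"                                         # stop the nvmf_tgt started earlier
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the rules the test tagged
    ip -4 addr flush cvl_0_1
    ip netns delete cvl_0_0_ns_spdk                         # what remove_spdk_ns boils down to here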
00:10:09.495 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:10:09.495 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:09.495 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:09.495 rmmod nvme_tcp 00:10:09.495 rmmod nvme_fabrics 00:10:09.495 rmmod nvme_keyring 00:10:09.495 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:09.495 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:10:09.495 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:10:09.495 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 1605704 ']' 00:10:09.495 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 1605704 00:10:09.495 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1605704 ']' 00:10:09.495 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1605704 00:10:09.495 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:10:09.495 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:09.495 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1605704 00:10:09.757 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:09.757 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:09.757 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1605704' 00:10:09.757 killing process with pid 1605704 00:10:09.757 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 1605704 00:10:09.757 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1605704 00:10:09.757 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:09.757 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:09.757 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:09.757 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:10:10.016 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:10:10.016 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:10.016 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:10:10.016 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:10.016 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:10.016 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.016 20:40:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.016 20:40:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.920 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:11.920 00:10:11.920 real 0m7.393s 00:10:11.920 user 0m11.942s 00:10:11.920 sys 0m2.394s 00:10:11.920 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.920 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:11.920 ************************************ 00:10:11.920 END TEST nvmf_referrals 00:10:11.920 ************************************ 00:10:11.920 20:40:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:11.920 20:40:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:11.920 20:40:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.920 20:40:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:11.920 ************************************ 00:10:11.920 START TEST nvmf_connect_disconnect 00:10:11.920 ************************************ 00:10:11.920 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:11.920 * Looking for test storage... 00:10:11.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.920 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:11.920 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:10:11.920 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:12.179 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:12.179 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:12.179 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:12.179 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:12.179 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:10:12.179 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:10:12.179 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:10:12.179 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:10:12.179 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:10:12.179 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:10:12.179 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:10:12.179 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:12.179 20:40:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:10:12.179 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:10:12.179 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:12.179 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:12.179 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:10:12.179 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:10:12.179 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:12.179 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:10:12.179 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:10:12.179 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:10:12.179 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:10:12.179 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:12.179 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:10:12.179 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:10:12.179 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:12.179 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:12.179 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:10:12.179 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:12.179 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:12.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.179 --rc genhtml_branch_coverage=1 00:10:12.179 --rc genhtml_function_coverage=1 00:10:12.179 --rc genhtml_legend=1 00:10:12.179 --rc geninfo_all_blocks=1 00:10:12.179 --rc geninfo_unexecuted_blocks=1 00:10:12.179 00:10:12.179 ' 00:10:12.179 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:12.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.179 --rc genhtml_branch_coverage=1 00:10:12.179 --rc genhtml_function_coverage=1 00:10:12.179 --rc genhtml_legend=1 00:10:12.179 --rc geninfo_all_blocks=1 00:10:12.179 --rc geninfo_unexecuted_blocks=1 00:10:12.179 00:10:12.179 ' 00:10:12.179 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:12.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.179 --rc genhtml_branch_coverage=1 00:10:12.179 --rc genhtml_function_coverage=1 00:10:12.179 --rc genhtml_legend=1 00:10:12.179 --rc geninfo_all_blocks=1 00:10:12.179 --rc geninfo_unexecuted_blocks=1 00:10:12.179 00:10:12.179 ' 00:10:12.179 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:12.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.179 --rc genhtml_branch_coverage=1 00:10:12.179 --rc genhtml_function_coverage=1 00:10:12.179 --rc genhtml_legend=1 00:10:12.179 --rc geninfo_all_blocks=1 00:10:12.179 --rc geninfo_unexecuted_blocks=1 00:10:12.179 00:10:12.179 ' 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:12.180 20:40:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:12.180 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:10:12.180 20:40:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:14.712 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:14.712 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:10:14.712 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:14.712 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:14.712 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:14.712 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:14.712 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:14.712 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:10:14.712 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:14.712 
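The "[: : integer expression expected" warning above is bash objecting to an empty string in an integer test ('[' '' -eq 1 ']' at nvmf/common.sh line 33): an unset config flag expands to nothing before it reaches -eq. It is harmless for this run; the usual defensive pattern is to give the expansion a default, e.g. (flag name purely illustrative):

    flag=${SPDK_SOME_FLAG:-0}        # an unset flag becomes 0 instead of ""
    if [ "$flag" -eq 1 ]; then
        echo "flag enabled"
    fi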
20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:10:14.712 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:10:14.712 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:10:14.712 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:10:14.712 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:10:14.712 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:10:14.712 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:14.712 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:14.712 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:14.712 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:14.712 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:14.712 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:14.712 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:14.712 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:14.712 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:14.713 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:14.713 
20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:14.713 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:14.713 Found net devices under 0000:09:00.0: cvl_0_0 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
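Device discovery for this second test repeats the same sysfs walk as before: the two E810 physical functions (0x8086:0x159b) pass the PCI-ID allowlist and the netdev behind each function is read straight from its PCI node. Reduced to its core (the cvl_* names come from this rig's ice driver):

    for pci in 0000:09:00.0 0000:09:00.1; do
        # a bound port exposes its netdev(s) under the PCI device's net/ directory
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            echo "Found net device under $pci: ${dev##*/}"
        done
    done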
00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:14.713 Found net devices under 0000:09:00.1: cvl_0_1 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:14.713 20:40:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:14.713 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:14.713 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:14.713 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:14.713 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:14.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:14.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:10:14.713 00:10:14.713 --- 10.0.0.2 ping statistics --- 00:10:14.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.713 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:10:14.713 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:14.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:14.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:10:14.713 00:10:14.713 --- 10.0.0.1 ping statistics --- 00:10:14.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.713 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:10:14.713 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:14.713 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:10:14.713 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:14.713 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:14.713 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:14.713 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:14.713 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:14.713 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:14.713 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:14.713 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:14.713 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:14.713 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:14.713 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:14.713 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=1608034 00:10:14.713 20:40:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:14.713 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 1608034 00:10:14.713 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1608034 ']' 00:10:14.713 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.713 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:14.714 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.714 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:14.714 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:14.714 [2024-11-26 20:40:18.137030] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:10:14.714 [2024-11-26 20:40:18.137105] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:14.714 [2024-11-26 20:40:18.205937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:14.714 [2024-11-26 20:40:18.261136] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:14.714 [2024-11-26 20:40:18.261184] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:14.714 [2024-11-26 20:40:18.261213] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:14.714 [2024-11-26 20:40:18.261223] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:14.714 [2024-11-26 20:40:18.261232] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
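Condensed from the nvmf_tcp_init and nvmfappstart steps traced above: the first E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, an iptables rule opens TCP/4420, reachability is checked with ping in both directions, and nvmf_tgt is started inside the namespace. The commands are the ones visible in the trace; the waitforlisten stand-in at the end is an assumption about how the RPC socket is polled, not a copy of autotest_common.sh.

    # Data-path setup (device names and addresses as logged above)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # The comment tag lets the later "iptables-save | grep -v SPDK_NVMF" cleanup drop this rule
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

    # Start the NVMe-oF target inside the namespace (binary path and flags as logged)
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Rough stand-in for waitforlisten: poll until the RPC socket answers
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
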
00:10:14.714 [2024-11-26 20:40:18.262725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.714 [2024-11-26 20:40:18.262782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:14.714 [2024-11-26 20:40:18.262847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:14.714 [2024-11-26 20:40:18.262851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.714 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:14.714 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:10:14.714 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:14.714 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:14.714 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:14.972 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:14.972 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:14.972 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.972 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:14.972 [2024-11-26 20:40:18.418179] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:14.972 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.972 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:14.972 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.972 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:14.972 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.972 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:14.972 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:14.972 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.972 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:14.972 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.972 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:14.972 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.972 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:14.972 20:40:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.972 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:14.972 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.972 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:14.972 [2024-11-26 20:40:18.487317] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:14.972 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.972 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:14.972 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:14.972 20:40:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:18.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.776 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.827 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.109 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:29.109 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:29.109 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:29.109 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:10:29.109 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:29.109 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:10:29.109 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:29.109 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:29.109 rmmod nvme_tcp 00:10:29.109 rmmod nvme_fabrics 00:10:29.109 rmmod nvme_keyring 00:10:29.109 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:29.109 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:10:29.109 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:10:29.109 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 1608034 ']' 00:10:29.109 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 1608034 00:10:29.109 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1608034 ']' 00:10:29.109 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1608034 00:10:29.109 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
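The rpc_cmd calls traced above build the target configuration over /var/tmp/spdk.sock: a TCP transport, a 64 MB malloc bdev, one subsystem with that bdev as a namespace, and a listener on 10.0.0.2:4420. The five "NQN:nqn.2016-06.io.spdk:cnode1 disconnected" lines are the only visible output of the loop in connect_disconnect.sh, so the nvme-cli pair in the second half of this sketch is an assumption about what each iteration roughly does, not a copy of that script.

    # Target configuration, as issued through rpc_cmd in the trace
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    $rpc bdev_malloc_create 64 512       # 64 MB bdev, 512 B blocks; returned name captured as Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Hypothetical shape of one of the five connect/disconnect iterations
    for i in $(seq 1 5); do
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "... disconnected 1 controller(s)"
    done
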
00:10:29.109 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:29.109 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1608034 00:10:29.109 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:29.109 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:29.109 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1608034' 00:10:29.109 killing process with pid 1608034 00:10:29.109 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1608034 00:10:29.109 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1608034 00:10:29.109 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:29.109 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:29.109 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:29.110 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:10:29.110 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:10:29.110 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:29.110 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:10:29.110 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:29.110 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:29.110 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.110 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.110 20:40:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.016 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:31.016 00:10:31.016 real 0m19.009s 00:10:31.016 user 0m56.564s 00:10:31.016 sys 0m3.514s 00:10:31.016 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.016 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:31.016 ************************************ 00:10:31.016 END TEST nvmf_connect_disconnect 00:10:31.016 ************************************ 00:10:31.016 20:40:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:31.016 20:40:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:31.016 20:40:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:31.016 20:40:34 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:31.016 ************************************ 00:10:31.016 START TEST nvmf_multitarget 00:10:31.016 ************************************ 00:10:31.016 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:31.016 * Looking for test storage... 00:10:31.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:31.016 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:31.016 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:10:31.016 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:31.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.275 --rc genhtml_branch_coverage=1 00:10:31.275 --rc genhtml_function_coverage=1 00:10:31.275 --rc genhtml_legend=1 00:10:31.275 --rc geninfo_all_blocks=1 00:10:31.275 --rc geninfo_unexecuted_blocks=1 00:10:31.275 00:10:31.275 ' 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:31.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.275 --rc genhtml_branch_coverage=1 00:10:31.275 --rc genhtml_function_coverage=1 00:10:31.275 --rc genhtml_legend=1 00:10:31.275 --rc geninfo_all_blocks=1 00:10:31.275 --rc geninfo_unexecuted_blocks=1 00:10:31.275 00:10:31.275 ' 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:31.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.275 --rc genhtml_branch_coverage=1 00:10:31.275 --rc genhtml_function_coverage=1 00:10:31.275 --rc genhtml_legend=1 00:10:31.275 --rc geninfo_all_blocks=1 00:10:31.275 --rc geninfo_unexecuted_blocks=1 00:10:31.275 00:10:31.275 ' 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:31.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.275 --rc genhtml_branch_coverage=1 00:10:31.275 --rc genhtml_function_coverage=1 00:10:31.275 --rc genhtml_legend=1 00:10:31.275 --rc geninfo_all_blocks=1 00:10:31.275 --rc geninfo_unexecuted_blocks=1 00:10:31.275 00:10:31.275 ' 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:31.275 20:40:34 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.275 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:10:31.276 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:31.276 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:31.276 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:31.276 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:31.276 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:31.276 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:31.276 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:31.276 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:31.276 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:31.276 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:31.276 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:31.276 20:40:34 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:10:31.276 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:31.276 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:31.276 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:31.276 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:31.276 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:31.276 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.276 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:31.276 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.276 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:31.276 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:31.276 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:10:31.276 20:40:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:33.249 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:33.249 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:10:33.249 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:33.249 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:33.249 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:33.249 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:33.249 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:33.249 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:10:33.249 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:33.249 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:10:33.249 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:10:33.249 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:10:33.249 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:10:33.249 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:10:33.249 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:10:33.249 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:33.249 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:33.249 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:10:33.249 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:33.250 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:33.250 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:33.250 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:33.250 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:33.250 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:33.250 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:33.250 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:33.250 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:33.250 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:33.250 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:33.250 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:33.250 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:33.250 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:33.250 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:33.250 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:33.250 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:33.250 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:33.250 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:33.250 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:33.250 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:33.250 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:33.250 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:33.250 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:33.250 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:33.250 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:33.250 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:33.250 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:33.250 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:33.250 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:10:33.250 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:33.508 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:33.508 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:33.508 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:33.508 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:33.508 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:33.508 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:33.508 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:33.508 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:33.508 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:33.508 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:33.508 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:33.508 Found net devices under 0000:09:00.0: cvl_0_0 00:10:33.508 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:33.508 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:33.508 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:33.508 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:33.508 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:33.508 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:33.508 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:33.508 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:33.508 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:33.508 Found net devices under 0000:09:00.1: cvl_0_1 00:10:33.508 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:33.508 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:33.508 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:10:33.508 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:33.508 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:33.508 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:33.508 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:33.508 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:33.508 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:33.508 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:33.508 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:33.508 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:33.508 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:33.509 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:33.509 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:33.509 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:33.509 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:33.509 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:33.509 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:33.509 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:33.509 20:40:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:33.509 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:33.509 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:33.509 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:33.509 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:33.509 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:33.509 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:33.509 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:33.509 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:33.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:33.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:10:33.509 00:10:33.509 --- 10.0.0.2 ping statistics --- 00:10:33.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.509 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:10:33.509 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:33.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:33.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:10:33.509 00:10:33.509 --- 10.0.0.1 ping statistics --- 00:10:33.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.509 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:10:33.509 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:33.509 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:10:33.509 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:33.509 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:33.509 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:33.509 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:33.509 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:33.509 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:33.509 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:33.509 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:33.509 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:33.509 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:33.509 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:33.509 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=1611798 00:10:33.509 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 1611798 00:10:33.509 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1611798 ']' 00:10:33.509 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.509 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:33.509 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:33.509 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.509 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:33.509 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:33.509 [2024-11-26 20:40:37.155605] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:10:33.509 [2024-11-26 20:40:37.155702] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:33.767 [2024-11-26 20:40:37.227766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:33.767 [2024-11-26 20:40:37.285191] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:33.767 [2024-11-26 20:40:37.285244] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:33.767 [2024-11-26 20:40:37.285273] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:33.767 [2024-11-26 20:40:37.285284] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:33.767 [2024-11-26 20:40:37.285312] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:33.767 [2024-11-26 20:40:37.286785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.767 [2024-11-26 20:40:37.286852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:33.767 [2024-11-26 20:40:37.286965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.767 [2024-11-26 20:40:37.286962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:33.767 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:33.767 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:10:33.767 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:33.767 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:33.767 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:33.767 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:33.767 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:33.767 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:33.767 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:10:34.024 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:34.024 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:34.024 "nvmf_tgt_1" 00:10:34.024 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:34.281 "nvmf_tgt_2" 00:10:34.281 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
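The multitarget test drives a separate RPC wrapper, multitarget_rpc.py, and only checks target counts: starting from the single default target, it creates nvmf_tgt_1 and nvmf_tgt_2, expects jq length to report three targets, then deletes both (the deletions appear just below in the trace) and expects the count to drop back to one. A condensed sketch of that sequence, with the count checks written as plain test commands rather than the script's '[' 1 '!=' 1 ']' guards:

    # Condensed from the multitarget flow traced here and immediately below
    mrpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($mrpc nvmf_get_targets | jq length)" = 1 ]     # only the default target exists
    $mrpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $mrpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($mrpc nvmf_get_targets | jq length)" = 3 ]     # default + the two new targets
    $mrpc nvmf_delete_target -n nvmf_tgt_1
    $mrpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($mrpc nvmf_get_targets | jq length)" = 1 ]     # back to the default target only
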
00:10:34.281 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:10:34.281 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:34.281 20:40:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:34.538 true 00:10:34.538 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:34.538 true 00:10:34.538 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:34.538 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:10:34.796 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:34.797 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:34.797 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:10:34.797 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:34.797 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:10:34.797 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:34.797 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:10:34.797 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:34.797 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:34.797 rmmod nvme_tcp 00:10:34.797 rmmod nvme_fabrics 00:10:34.797 rmmod nvme_keyring 00:10:34.797 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:34.797 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:10:34.797 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:10:34.797 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 1611798 ']' 00:10:34.797 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 1611798 00:10:34.797 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1611798 ']' 00:10:34.797 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1611798 00:10:34.797 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:10:34.797 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:34.797 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1611798 00:10:34.797 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:34.797 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:34.797 20:40:38 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1611798' 00:10:34.797 killing process with pid 1611798 00:10:34.797 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1611798 00:10:34.797 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1611798 00:10:35.054 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:35.054 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:35.054 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:35.054 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:10:35.054 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:10:35.054 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:35.054 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:10:35.054 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:35.054 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:35.054 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.054 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.054 20:40:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.961 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:36.961 00:10:36.961 real 0m6.011s 00:10:36.961 user 0m6.760s 00:10:36.961 sys 0m2.094s 00:10:36.961 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:36.961 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:36.961 ************************************ 00:10:36.961 END TEST nvmf_multitarget 00:10:36.961 ************************************ 00:10:36.961 20:40:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:36.961 20:40:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:36.961 20:40:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.961 20:40:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:37.220 ************************************ 00:10:37.220 START TEST nvmf_rpc 00:10:37.220 ************************************ 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:37.220 * Looking for test storage... 
00:10:37.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:37.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.220 --rc genhtml_branch_coverage=1 00:10:37.220 --rc genhtml_function_coverage=1 00:10:37.220 --rc genhtml_legend=1 00:10:37.220 --rc geninfo_all_blocks=1 00:10:37.220 --rc geninfo_unexecuted_blocks=1 00:10:37.220 00:10:37.220 ' 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:37.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.220 --rc genhtml_branch_coverage=1 00:10:37.220 --rc genhtml_function_coverage=1 00:10:37.220 --rc genhtml_legend=1 00:10:37.220 --rc geninfo_all_blocks=1 00:10:37.220 --rc geninfo_unexecuted_blocks=1 00:10:37.220 00:10:37.220 ' 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:37.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.220 --rc genhtml_branch_coverage=1 00:10:37.220 --rc genhtml_function_coverage=1 00:10:37.220 --rc genhtml_legend=1 00:10:37.220 --rc geninfo_all_blocks=1 00:10:37.220 --rc geninfo_unexecuted_blocks=1 00:10:37.220 00:10:37.220 ' 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:37.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.220 --rc genhtml_branch_coverage=1 00:10:37.220 --rc genhtml_function_coverage=1 00:10:37.220 --rc genhtml_legend=1 00:10:37.220 --rc geninfo_all_blocks=1 00:10:37.220 --rc geninfo_unexecuted_blocks=1 00:10:37.220 00:10:37.220 ' 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.220 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:10:37.221 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.221 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:10:37.221 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:37.221 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:37.221 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.221 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.221 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.221 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:37.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:37.221 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:37.221 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:37.221 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:37.221 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:10:37.221 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:10:37.221 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:37.221 20:40:40 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.221 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:37.221 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:37.221 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:37.221 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.221 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.221 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.221 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:37.221 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:37.221 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:10:37.221 20:40:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:39.751 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:39.751 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:10:39.751 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:39.751 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:39.751 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:39.751 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:39.751 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:39.751 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:10:39.751 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:39.751 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:10:39.751 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:10:39.751 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:10:39.751 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:10:39.751 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:10:39.751 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:10:39.751 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:39.751 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:39.751 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:39.751 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:39.751 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:39.752 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:39.752 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:39.752 Found net devices under 0000:09:00.0: cvl_0_0 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:39.752 Found net devices under 0000:09:00.1: cvl_0_1 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:39.752 20:40:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:39.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:39.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:10:39.752 00:10:39.752 --- 10.0.0.2 ping statistics --- 00:10:39.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.752 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:39.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:39.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:10:39.752 00:10:39.752 --- 10.0.0.1 ping statistics --- 00:10:39.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.752 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=1613913 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 1613913 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1613913 ']' 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:39.752 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:39.752 [2024-11-26 20:40:43.279955] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:10:39.753 [2024-11-26 20:40:43.280064] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.753 [2024-11-26 20:40:43.349950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:39.753 [2024-11-26 20:40:43.404400] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:39.753 [2024-11-26 20:40:43.404471] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:39.753 [2024-11-26 20:40:43.404494] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:39.753 [2024-11-26 20:40:43.404504] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:39.753 [2024-11-26 20:40:43.404513] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:39.753 [2024-11-26 20:40:43.406149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.753 [2024-11-26 20:40:43.406256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.753 [2024-11-26 20:40:43.406343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:39.753 [2024-11-26 20:40:43.406348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.011 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:40.012 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:40.012 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:40.012 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:40.012 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.012 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:40.012 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:40.012 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.012 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.012 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.012 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:40.012 "tick_rate": 2700000000, 00:10:40.012 "poll_groups": [ 00:10:40.012 { 00:10:40.012 "name": "nvmf_tgt_poll_group_000", 00:10:40.012 "admin_qpairs": 0, 00:10:40.012 "io_qpairs": 0, 00:10:40.012 "current_admin_qpairs": 0, 00:10:40.012 "current_io_qpairs": 0, 00:10:40.012 "pending_bdev_io": 0, 00:10:40.012 "completed_nvme_io": 0, 00:10:40.012 "transports": [] 00:10:40.012 }, 00:10:40.012 { 00:10:40.012 "name": "nvmf_tgt_poll_group_001", 00:10:40.012 "admin_qpairs": 0, 00:10:40.012 "io_qpairs": 0, 00:10:40.012 "current_admin_qpairs": 0, 00:10:40.012 "current_io_qpairs": 0, 00:10:40.012 "pending_bdev_io": 0, 00:10:40.012 "completed_nvme_io": 0, 00:10:40.012 "transports": [] 00:10:40.012 }, 00:10:40.012 { 00:10:40.012 "name": "nvmf_tgt_poll_group_002", 00:10:40.012 "admin_qpairs": 0, 00:10:40.012 "io_qpairs": 0, 00:10:40.012 
"current_admin_qpairs": 0, 00:10:40.012 "current_io_qpairs": 0, 00:10:40.012 "pending_bdev_io": 0, 00:10:40.012 "completed_nvme_io": 0, 00:10:40.012 "transports": [] 00:10:40.012 }, 00:10:40.012 { 00:10:40.012 "name": "nvmf_tgt_poll_group_003", 00:10:40.012 "admin_qpairs": 0, 00:10:40.012 "io_qpairs": 0, 00:10:40.012 "current_admin_qpairs": 0, 00:10:40.012 "current_io_qpairs": 0, 00:10:40.012 "pending_bdev_io": 0, 00:10:40.012 "completed_nvme_io": 0, 00:10:40.012 "transports": [] 00:10:40.012 } 00:10:40.012 ] 00:10:40.012 }' 00:10:40.012 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:40.012 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:40.012 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:40.012 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:40.012 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:10:40.012 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:40.012 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:40.012 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:40.012 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.012 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.012 [2024-11-26 20:40:43.652543] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:40.012 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.012 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:40.012 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.012 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.012 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.012 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:40.012 "tick_rate": 2700000000, 00:10:40.012 "poll_groups": [ 00:10:40.012 { 00:10:40.012 "name": "nvmf_tgt_poll_group_000", 00:10:40.012 "admin_qpairs": 0, 00:10:40.012 "io_qpairs": 0, 00:10:40.012 "current_admin_qpairs": 0, 00:10:40.012 "current_io_qpairs": 0, 00:10:40.012 "pending_bdev_io": 0, 00:10:40.012 "completed_nvme_io": 0, 00:10:40.012 "transports": [ 00:10:40.012 { 00:10:40.012 "trtype": "TCP" 00:10:40.012 } 00:10:40.012 ] 00:10:40.012 }, 00:10:40.012 { 00:10:40.012 "name": "nvmf_tgt_poll_group_001", 00:10:40.012 "admin_qpairs": 0, 00:10:40.012 "io_qpairs": 0, 00:10:40.012 "current_admin_qpairs": 0, 00:10:40.012 "current_io_qpairs": 0, 00:10:40.012 "pending_bdev_io": 0, 00:10:40.012 "completed_nvme_io": 0, 00:10:40.012 "transports": [ 00:10:40.012 { 00:10:40.012 "trtype": "TCP" 00:10:40.012 } 00:10:40.012 ] 00:10:40.012 }, 00:10:40.012 { 00:10:40.012 "name": "nvmf_tgt_poll_group_002", 00:10:40.012 "admin_qpairs": 0, 00:10:40.012 "io_qpairs": 0, 00:10:40.012 "current_admin_qpairs": 0, 00:10:40.012 "current_io_qpairs": 0, 00:10:40.012 "pending_bdev_io": 0, 00:10:40.012 "completed_nvme_io": 0, 00:10:40.012 "transports": [ 00:10:40.012 { 00:10:40.012 "trtype": "TCP" 
00:10:40.012 } 00:10:40.012 ] 00:10:40.012 }, 00:10:40.012 { 00:10:40.012 "name": "nvmf_tgt_poll_group_003", 00:10:40.012 "admin_qpairs": 0, 00:10:40.012 "io_qpairs": 0, 00:10:40.012 "current_admin_qpairs": 0, 00:10:40.012 "current_io_qpairs": 0, 00:10:40.012 "pending_bdev_io": 0, 00:10:40.012 "completed_nvme_io": 0, 00:10:40.012 "transports": [ 00:10:40.012 { 00:10:40.012 "trtype": "TCP" 00:10:40.012 } 00:10:40.012 ] 00:10:40.012 } 00:10:40.012 ] 00:10:40.012 }' 00:10:40.012 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:40.012 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:40.012 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:40.012 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.271 Malloc1 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.271 [2024-11-26 20:40:43.816923] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:10:40.271 [2024-11-26 20:40:43.839578] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:10:40.271 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:40.271 could not add new controller: failed to write to nvme-fabrics device 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:40.271 20:40:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.271 20:40:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:40.837 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:40.837 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:40.837 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:40.837 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:40.837 20:40:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:43.364 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:43.364 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:43.365 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:43.365 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:43.365 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:43.365 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:43.365 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:43.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.365 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:43.365 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:43.365 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:43.365 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:43.365 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:43.365 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:43.365 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:43.365 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:43.365 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.365 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:43.365 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.365 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:43.365 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:43.365 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:43.365 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:10:43.365 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:43.365 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:10:43.365 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:43.365 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:10:43.365 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:43.365 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:10:43.365 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:10:43.365 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:43.365 [2024-11-26 20:40:46.629636] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:10:43.365 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:43.365 could not add new controller: failed to write to nvme-fabrics device 00:10:43.365 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:43.365 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:43.365 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:43.365 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:43.365 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:43.365 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.365 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:43.365 
20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.365 20:40:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:43.931 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:43.931 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:43.931 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:43.931 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:43.931 20:40:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:45.837 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:45.837 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:45.837 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:45.837 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:45.837 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:45.837 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:45.837 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:45.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.837 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:45.837 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:45.837 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:45.837 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.837 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:45.837 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.837 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:45.837 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:45.837 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.837 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.837 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.837 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:45.837 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:45.837 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:45.837 
20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.837 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.837 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.837 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:45.837 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.837 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.837 [2024-11-26 20:40:49.508923] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:45.837 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.837 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:45.837 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.837 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.837 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.838 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:45.838 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.838 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:45.838 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.838 20:40:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:46.771 20:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:46.771 20:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:46.771 20:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:46.771 20:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:46.771 20:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:48.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:48.665 [2024-11-26 20:40:52.283374] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.665 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:49.229 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:49.229 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:49.229 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:49.229 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:49.229 20:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:51.754 20:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:51.754 20:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:51.754 20:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:51.754 20:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:51.754 20:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:51.754 20:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:51.754 20:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:51.754 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.754 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:51.754 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:51.754 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:51.754 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:51.754 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:51.754 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:51.754 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:51.754 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:51.754 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.754 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.754 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.754 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:51.754 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.754 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.754 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.754 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:51.754 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:51.754 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.754 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.754 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.754 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:51.754 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.754 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.754 [2024-11-26 20:40:55.061930] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:51.754 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.754 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:51.754 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.754 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.754 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.754 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:51.754 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.754 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.754 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.754 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:52.318 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:52.318 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:52.318 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:52.318 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:52.318 20:40:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:54.214 
20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:54.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.214 [2024-11-26 20:40:57.885856] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.214 20:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:55.150 20:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:55.150 20:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:55.150 20:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:55.150 20:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:55.150 20:40:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:57.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.125 [2024-11-26 20:41:00.722107] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.125 20:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:58.060 20:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:58.060 20:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:58.060 20:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:58.060 20:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:58.060 20:41:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:59.962 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:10:59.962 
20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.962 [2024-11-26 20:41:03.550162] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.962 [2024-11-26 20:41:03.598194] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.962 
20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.962 [2024-11-26 20:41:03.646380] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.962 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.221 [2024-11-26 20:41:03.694538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.221 [2024-11-26 20:41:03.742737] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:00.221 "tick_rate": 2700000000, 00:11:00.221 "poll_groups": [ 00:11:00.221 { 00:11:00.221 "name": "nvmf_tgt_poll_group_000", 00:11:00.221 "admin_qpairs": 2, 00:11:00.221 "io_qpairs": 84, 00:11:00.221 "current_admin_qpairs": 0, 00:11:00.221 "current_io_qpairs": 0, 00:11:00.221 "pending_bdev_io": 0, 00:11:00.221 "completed_nvme_io": 185, 00:11:00.221 "transports": [ 00:11:00.221 { 00:11:00.221 "trtype": "TCP" 00:11:00.221 } 00:11:00.221 ] 00:11:00.221 }, 00:11:00.221 { 00:11:00.221 "name": "nvmf_tgt_poll_group_001", 00:11:00.221 "admin_qpairs": 2, 00:11:00.221 "io_qpairs": 84, 00:11:00.221 "current_admin_qpairs": 0, 00:11:00.221 "current_io_qpairs": 0, 00:11:00.221 "pending_bdev_io": 0, 00:11:00.221 "completed_nvme_io": 134, 00:11:00.221 "transports": [ 00:11:00.221 { 00:11:00.221 "trtype": "TCP" 00:11:00.221 } 00:11:00.221 ] 00:11:00.221 }, 00:11:00.221 { 00:11:00.221 "name": "nvmf_tgt_poll_group_002", 00:11:00.221 "admin_qpairs": 1, 00:11:00.221 "io_qpairs": 84, 00:11:00.221 "current_admin_qpairs": 0, 00:11:00.221 "current_io_qpairs": 0, 00:11:00.221 "pending_bdev_io": 0, 00:11:00.221 "completed_nvme_io": 232, 00:11:00.221 "transports": [ 00:11:00.221 { 00:11:00.221 "trtype": "TCP" 00:11:00.221 } 00:11:00.221 ] 00:11:00.221 }, 00:11:00.221 { 00:11:00.221 "name": "nvmf_tgt_poll_group_003", 00:11:00.221 "admin_qpairs": 2, 00:11:00.221 "io_qpairs": 84, 00:11:00.221 "current_admin_qpairs": 0, 00:11:00.221 "current_io_qpairs": 0, 00:11:00.221 "pending_bdev_io": 0, 00:11:00.221 "completed_nvme_io": 135, 00:11:00.221 "transports": [ 00:11:00.221 { 00:11:00.221 "trtype": "TCP" 00:11:00.221 } 00:11:00.221 ] 00:11:00.221 } 00:11:00.221 ] 00:11:00.221 }' 00:11:00.221 20:41:03 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:00.221 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:00.222 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:00.222 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:00.222 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:11:00.222 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:00.222 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:11:00.222 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:00.222 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:00.222 rmmod nvme_tcp 00:11:00.222 rmmod nvme_fabrics 00:11:00.222 rmmod nvme_keyring 00:11:00.479 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:00.479 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:11:00.479 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:11:00.479 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 1613913 ']' 00:11:00.479 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 1613913 00:11:00.479 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1613913 ']' 00:11:00.479 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1613913 00:11:00.479 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:11:00.479 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:00.479 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1613913 00:11:00.479 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:00.479 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:00.479 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1613913' 00:11:00.479 killing process with pid 1613913 00:11:00.479 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1613913 00:11:00.479 20:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 1613913 00:11:00.736 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:00.736 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:00.736 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:00.736 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:11:00.736 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:11:00.736 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:00.736 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:11:00.736 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:00.736 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:00.736 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.736 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.736 20:41:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.643 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:02.643 00:11:02.643 real 0m25.610s 00:11:02.643 user 1m22.651s 00:11:02.643 sys 0m4.394s 00:11:02.643 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.643 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.643 ************************************ 00:11:02.643 END TEST nvmf_rpc 00:11:02.643 ************************************ 00:11:02.643 20:41:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:02.643 20:41:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:02.643 20:41:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.643 20:41:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:02.901 ************************************ 00:11:02.901 START TEST nvmf_invalid 00:11:02.901 ************************************ 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:02.901 * Looking for test storage... 
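The nvmf_rpc summary above totals the per-poll-group counters reported by nvmf_get_stats using a jq filter piped through awk (the jsum helper in rpc.sh). A minimal sketch of the same aggregation, again assuming scripts/rpc.py is on PATH:

    stats=$(rpc.py nvmf_get_stats)
    # Sum one per-poll-group field across all groups, as jsum does with jq + awk.
    admin_total=$(echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}')
    io_total=$(echo "$stats"    | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1} END {print s}')
    # In this run the totals were 7 admin qpairs and 336 I/O qpairs (4 poll groups x 84 each).
    (( admin_total > 0 && io_total > 0 ))
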
00:11:02.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:02.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.901 --rc genhtml_branch_coverage=1 00:11:02.901 --rc genhtml_function_coverage=1 00:11:02.901 --rc genhtml_legend=1 00:11:02.901 --rc geninfo_all_blocks=1 00:11:02.901 --rc geninfo_unexecuted_blocks=1 00:11:02.901 00:11:02.901 ' 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:02.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.901 --rc genhtml_branch_coverage=1 00:11:02.901 --rc genhtml_function_coverage=1 00:11:02.901 --rc genhtml_legend=1 00:11:02.901 --rc geninfo_all_blocks=1 00:11:02.901 --rc geninfo_unexecuted_blocks=1 00:11:02.901 00:11:02.901 ' 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:02.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.901 --rc genhtml_branch_coverage=1 00:11:02.901 --rc genhtml_function_coverage=1 00:11:02.901 --rc genhtml_legend=1 00:11:02.901 --rc geninfo_all_blocks=1 00:11:02.901 --rc geninfo_unexecuted_blocks=1 00:11:02.901 00:11:02.901 ' 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:02.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.901 --rc genhtml_branch_coverage=1 00:11:02.901 --rc genhtml_function_coverage=1 00:11:02.901 --rc genhtml_legend=1 00:11:02.901 --rc geninfo_all_blocks=1 00:11:02.901 --rc geninfo_unexecuted_blocks=1 00:11:02.901 00:11:02.901 ' 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:02.901 20:41:06 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.901 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.902 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.902 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.902 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.902 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:02.902 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.902 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:11:02.902 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:02.902 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:02.902 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.902 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.902 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.902 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:02.902 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:02.902 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:02.902 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:02.902 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:02.902 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:02.902 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:02.902 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:02.902 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:02.902 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:02.902 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:02.902 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:02.902 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.902 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:02.902 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:02.902 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:02.902 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.902 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:02.902 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.902 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:02.902 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:02.902 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:11:02.902 20:41:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:05.430 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:05.431 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:05.431 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:05.431 Found net devices under 0000:09:00.0: cvl_0_0 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:05.431 Found net devices under 0000:09:00.1: cvl_0_1 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:11:05.431 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:05.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:05.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:11:05.432 00:11:05.432 --- 10.0.0.2 ping statistics --- 00:11:05.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.432 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:05.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:05.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:11:05.432 00:11:05.432 --- 10.0.0.1 ping statistics --- 00:11:05.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.432 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=1618484 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 1618484 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 1618484 ']' 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:05.432 20:41:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:05.432 [2024-11-26 20:41:08.871920] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
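At this point nvmftestinit has wired the two e810 ports up for the TCP run (cvl_0_0 moved into the cvl_0_0_ns_spdk namespace at 10.0.0.2, cvl_0_1 left on the host side at 10.0.0.1), opened TCP port 4420 in iptables, and confirmed connectivity with the two pings. nvmf_tgt is then launched inside that namespace with -m 0xF, and the harness blocks until the target's JSON-RPC socket is ready. The lines below are a minimal sketch of that wait step, roughly what the waitforlisten helper from autotest_common.sh is doing here; the polling loop, retry count, and error handling are illustrative only, not the helper's actual code.

    pid=1618484                     # nvmfpid reported in the trace above
    sock=/var/tmp/spdk.sock         # default SPDK JSON-RPC UNIX socket
    for _ in $(seq 1 100); do       # roughly a 10 s budget at 0.1 s per poll
        # Give up early if the target died during startup.
        kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        # The socket only appears once the RPC server is listening.
        [ -S "$sock" ] && break
        sleep 0.1
    done
    [ -S "$sock" ] || { echo "timed out waiting for $sock" >&2; exit 1; }

Once that socket is up, every rpc.py call in this test is issued against it, which is why the EAL initialization and reactor start-up messages below appear before the first nvmf_create_subsystem attempt.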
00:11:05.432 [2024-11-26 20:41:08.871998] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:05.432 [2024-11-26 20:41:08.964046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:05.432 [2024-11-26 20:41:09.040418] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:05.432 [2024-11-26 20:41:09.040475] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:05.432 [2024-11-26 20:41:09.040502] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:05.432 [2024-11-26 20:41:09.040524] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:05.432 [2024-11-26 20:41:09.040543] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:05.432 [2024-11-26 20:41:09.042532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.432 [2024-11-26 20:41:09.042595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:05.432 [2024-11-26 20:41:09.042672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.432 [2024-11-26 20:41:09.042663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:05.691 20:41:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:05.691 20:41:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:11:05.691 20:41:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:05.691 20:41:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:05.691 20:41:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:05.691 20:41:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:05.691 20:41:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:05.691 20:41:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode11178 00:11:05.948 [2024-11-26 20:41:09.539333] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:05.948 20:41:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:11:05.948 { 00:11:05.948 "nqn": "nqn.2016-06.io.spdk:cnode11178", 00:11:05.948 "tgt_name": "foobar", 00:11:05.948 "method": "nvmf_create_subsystem", 00:11:05.948 "req_id": 1 00:11:05.948 } 00:11:05.948 Got JSON-RPC error response 00:11:05.948 response: 00:11:05.948 { 00:11:05.948 "code": -32603, 00:11:05.948 "message": "Unable to find target foobar" 00:11:05.948 }' 00:11:05.948 20:41:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:11:05.948 { 00:11:05.948 "nqn": "nqn.2016-06.io.spdk:cnode11178", 00:11:05.948 "tgt_name": "foobar", 00:11:05.948 "method": "nvmf_create_subsystem", 00:11:05.948 "req_id": 1 00:11:05.948 } 00:11:05.948 Got JSON-RPC error response 00:11:05.948 
response: 00:11:05.948 { 00:11:05.948 "code": -32603, 00:11:05.948 "message": "Unable to find target foobar" 00:11:05.948 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:05.948 20:41:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:05.948 20:41:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode22954 00:11:06.207 [2024-11-26 20:41:09.808247] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22954: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:06.207 20:41:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:11:06.207 { 00:11:06.207 "nqn": "nqn.2016-06.io.spdk:cnode22954", 00:11:06.207 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:06.207 "method": "nvmf_create_subsystem", 00:11:06.207 "req_id": 1 00:11:06.207 } 00:11:06.207 Got JSON-RPC error response 00:11:06.207 response: 00:11:06.207 { 00:11:06.207 "code": -32602, 00:11:06.207 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:06.207 }' 00:11:06.207 20:41:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:11:06.207 { 00:11:06.207 "nqn": "nqn.2016-06.io.spdk:cnode22954", 00:11:06.207 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:06.207 "method": "nvmf_create_subsystem", 00:11:06.207 "req_id": 1 00:11:06.207 } 00:11:06.207 Got JSON-RPC error response 00:11:06.207 response: 00:11:06.207 { 00:11:06.207 "code": -32602, 00:11:06.207 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:06.207 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:06.207 20:41:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:06.207 20:41:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode3301 00:11:06.465 [2024-11-26 20:41:10.097251] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3301: invalid model number 'SPDK_Controller' 00:11:06.465 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:11:06.465 { 00:11:06.465 "nqn": "nqn.2016-06.io.spdk:cnode3301", 00:11:06.465 "model_number": "SPDK_Controller\u001f", 00:11:06.465 "method": "nvmf_create_subsystem", 00:11:06.465 "req_id": 1 00:11:06.465 } 00:11:06.465 Got JSON-RPC error response 00:11:06.465 response: 00:11:06.465 { 00:11:06.465 "code": -32602, 00:11:06.465 "message": "Invalid MN SPDK_Controller\u001f" 00:11:06.465 }' 00:11:06.465 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:11:06.465 { 00:11:06.466 "nqn": "nqn.2016-06.io.spdk:cnode3301", 00:11:06.466 "model_number": "SPDK_Controller\u001f", 00:11:06.466 "method": "nvmf_create_subsystem", 00:11:06.466 "req_id": 1 00:11:06.466 } 00:11:06.466 Got JSON-RPC error response 00:11:06.466 response: 00:11:06.466 { 00:11:06.466 "code": -32602, 00:11:06.466 "message": "Invalid MN SPDK_Controller\u001f" 00:11:06.466 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:11:06.466 20:41:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
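The xtrace entries just above, continuing below, step through gen_random_s as it assembles a 21-character string one character at a time: for each position it picks a codepoint from the chars array (the ASCII range 32 through 127), prints it with printf %x, turns it back into a character with echo -e '\xNN', and appends it, which is why shell-special characters such as '?' show up quoted in the trace. RANDOM=0 earlier in the script makes the sequence reproducible from run to run. A simplified sketch with the same effect (not the invalid.sh helper itself, which is what the trace shows verbatim):

    gen_random_s_sketch() {
        local length=$1 ll string=
        for (( ll = 0; ll < length; ll++ )); do
            # Pick a codepoint in [32,127], i.e. one of the 96 entries in chars.
            local code=$(( 32 + RANDOM % 96 ))
            # Convert the codepoint to a hex escape and append the character.
            string+=$(printf "\\x$(printf '%x' "$code")")
        done
        echo "$string"
    }
    gen_random_s_sketch 21   # 21 characters: one more than the 20-byte SN field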
00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:11:06.466 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:11:06.724 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:11:06.724 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.724 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x39' 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 37 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ J == \- ]] 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'JVK?3hrycl9!1pqXL%A;=' 00:11:06.725 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'JVK?3hrycl9!1pqXL%A;=' nqn.2016-06.io.spdk:cnode14643 00:11:06.984 [2024-11-26 20:41:10.450338] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14643: invalid serial number 'JVK?3hrycl9!1pqXL%A;=' 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:11:06.984 { 00:11:06.984 "nqn": "nqn.2016-06.io.spdk:cnode14643", 00:11:06.984 "serial_number": "JVK?3hrycl9!1pqXL%A;=", 00:11:06.984 "method": "nvmf_create_subsystem", 00:11:06.984 "req_id": 1 00:11:06.984 } 00:11:06.984 Got JSON-RPC error response 00:11:06.984 response: 00:11:06.984 { 00:11:06.984 "code": -32602, 00:11:06.984 "message": "Invalid SN JVK?3hrycl9!1pqXL%A;=" 00:11:06.984 }' 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:11:06.984 { 00:11:06.984 "nqn": "nqn.2016-06.io.spdk:cnode14643", 00:11:06.984 "serial_number": "JVK?3hrycl9!1pqXL%A;=", 00:11:06.984 "method": "nvmf_create_subsystem", 00:11:06.984 "req_id": 1 00:11:06.984 } 00:11:06.984 Got JSON-RPC error response 00:11:06.984 response: 00:11:06.984 { 00:11:06.984 "code": 
-32602, 00:11:06.984 "message": "Invalid SN JVK?3hrycl9!1pqXL%A;=" 00:11:06.984 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 
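The loop resuming below is the same generator run again with length 41, this time to feed the model-number variant (-d) of the same check. Each of these negative tests follows one pattern: call rpc.py nvmf_create_subsystem with a value the target must refuse, keep the JSON-RPC error text in out, and assert that the expected message came back. Condensed into a few lines for the serial-number case already shown above (the error-capture idiom and the explicit failure branch are illustrative glue, not the script verbatim):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sn=$(gen_random_s 21)    # one character past the 20-byte NVMe serial-number field
    out=$("$rpc" nvmf_create_subsystem -s "$sn" nqn.2016-06.io.spdk:cnode14643 2>&1) || true
    # The target must answer with code -32602 and "Invalid SN ..." as logged above.
    [[ $out == *"Invalid SN"* ]] || { echo "unexpected response: $out" >&2; exit 1; }

The earlier checks in this trace have the same shape: -t foobar provoked "Unable to find target foobar", and -d $'SPDK_Controller\037' provoked "Invalid MN"; the 41-character string being built here exercises the over-length case of that -d check against the 40-byte model-number field.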
00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.984 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length 
)) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ 
)) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=Y 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:11:06.985 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# echo -e '\x55' 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 
-- # printf %x 55 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ u == \- ]] 00:11:06.986 20:41:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'utR(UtR(UtR(UtR(UtR(U /dev/null' 00:11:09.866 20:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.404 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:12.404 00:11:12.404 real 0m9.253s 00:11:12.404 user 0m22.216s 00:11:12.404 sys 0m2.569s 00:11:12.404 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.404 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:12.404 ************************************ 00:11:12.404 END TEST nvmf_invalid 00:11:12.404 ************************************ 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:12.405 ************************************ 00:11:12.405 START TEST nvmf_connect_stress 00:11:12.405 ************************************ 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:12.405 * Looking for test storage... 
00:11:12.405 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:12.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.405 --rc genhtml_branch_coverage=1 00:11:12.405 --rc genhtml_function_coverage=1 00:11:12.405 --rc genhtml_legend=1 00:11:12.405 --rc geninfo_all_blocks=1 00:11:12.405 --rc geninfo_unexecuted_blocks=1 00:11:12.405 00:11:12.405 ' 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:12.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.405 --rc genhtml_branch_coverage=1 00:11:12.405 --rc genhtml_function_coverage=1 00:11:12.405 --rc genhtml_legend=1 00:11:12.405 --rc geninfo_all_blocks=1 00:11:12.405 --rc geninfo_unexecuted_blocks=1 00:11:12.405 00:11:12.405 ' 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:12.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.405 --rc genhtml_branch_coverage=1 00:11:12.405 --rc genhtml_function_coverage=1 00:11:12.405 --rc genhtml_legend=1 00:11:12.405 --rc geninfo_all_blocks=1 00:11:12.405 --rc geninfo_unexecuted_blocks=1 00:11:12.405 00:11:12.405 ' 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:12.405 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:12.405 --rc genhtml_branch_coverage=1 00:11:12.405 --rc genhtml_function_coverage=1 00:11:12.405 --rc genhtml_legend=1 00:11:12.405 --rc geninfo_all_blocks=1 00:11:12.405 --rc geninfo_unexecuted_blocks=1 00:11:12.405 00:11:12.405 ' 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:12.405 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:12.406 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:12.406 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:12.406 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:11:12.406 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:12.406 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:12.406 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:12.406 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:12.406 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:12.406 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:12.406 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:12.406 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:12.406 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:12.406 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:12.406 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.406 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:12.406 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.406 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:12.406 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:12.406 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:11:12.406 20:41:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:11:14.936 20:41:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:14.936 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:14.936 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.936 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:14.937 Found net devices under 0000:09:00.0: cvl_0_0 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:14.937 Found net devices under 0000:09:00.1: cvl_0_1 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:14.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:14.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:11:14.937 00:11:14.937 --- 10.0.0.2 ping statistics --- 00:11:14.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.937 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:14.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:14.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:11:14.937 00:11:14.937 --- 10.0.0.1 ping statistics --- 00:11:14.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.937 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1621174 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1621174 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1621174 ']' 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:14.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.937 [2024-11-26 20:41:18.225235] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:11:14.937 [2024-11-26 20:41:18.225324] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.937 [2024-11-26 20:41:18.293813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:14.937 [2024-11-26 20:41:18.351451] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:14.937 [2024-11-26 20:41:18.351499] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:14.937 [2024-11-26 20:41:18.351523] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:14.937 [2024-11-26 20:41:18.351534] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:14.937 [2024-11-26 20:41:18.351544] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:14.937 [2024-11-26 20:41:18.352958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:14.937 [2024-11-26 20:41:18.353022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:14.937 [2024-11-26 20:41:18.353026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.937 [2024-11-26 20:41:18.500151] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
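Condensed, the target bring-up traced in the entries above and immediately below is roughly the following shell sketch. It assumes rpc_cmd in the trace is the usual autotest wrapper around SPDK's scripts/rpc.py talking to the /var/tmp/spdk.sock socket mentioned above; the flags, addresses, NQN and bdev geometry are copied verbatim from the surrounding log entries.

  # target application inside the cvl_0_0_ns_spdk namespace (as launched by nvmfappstart above)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

  # transport, subsystem, listener and a null bdev, matching the rpc_cmd calls in the trace
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_null_create NULL1 1000 512

  # stress client pointed at that listener for a 10 second run (PERF_PID in the trace)
  ./test/nvme/connect_stress/connect_stress -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &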
00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:14.937 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.938 [2024-11-26 20:41:18.517541] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.938 NULL1 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1621205 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:14.938 20:41:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:14.938 20:41:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1621205 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.938 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.503 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.503 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1621205 00:11:15.504 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.504 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.504 20:41:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.761 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.761 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1621205 00:11:15.761 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.761 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.761 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:16.018 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.018 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1621205 00:11:16.018 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:16.018 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.018 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:16.275 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.275 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1621205 00:11:16.275 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:16.276 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.276 20:41:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:16.534 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.534 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1621205 00:11:16.534 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:16.534 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.534 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.099 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.099 20:41:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1621205 00:11:17.099 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.099 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.099 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.358 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.358 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1621205 00:11:17.358 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.358 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.358 20:41:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.616 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.616 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1621205 00:11:17.616 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.616 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.616 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.874 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.874 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1621205 00:11:17.874 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.874 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.874 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:18.134 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.134 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1621205 00:11:18.134 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:18.134 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.134 20:41:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:18.462 20:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.462 20:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1621205 00:11:18.462 20:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:18.462 20:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.462 20:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.027 20:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.027 20:41:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1621205 00:11:19.027 20:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:19.027 20:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.027 20:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.285 20:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.285 20:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1621205 00:11:19.285 20:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:19.285 20:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.285 20:41:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.543 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.543 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1621205 00:11:19.543 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:19.543 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.543 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.800 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.800 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1621205 00:11:19.800 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:19.800 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.800 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:20.058 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.058 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1621205 00:11:20.058 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:20.058 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.058 20:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:20.624 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.624 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1621205 00:11:20.624 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:20.624 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.624 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:20.883 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.883 20:41:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1621205 00:11:20.883 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:20.883 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.883 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:21.141 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.141 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1621205 00:11:21.141 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:21.141 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.141 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:21.397 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.397 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1621205 00:11:21.397 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:21.397 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.397 20:41:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:21.654 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.654 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1621205 00:11:21.654 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:21.654 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.654 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:22.220 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.220 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1621205 00:11:22.220 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:22.220 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.220 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:22.478 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.478 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1621205 00:11:22.478 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:22.478 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.478 20:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:22.735 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.735 20:41:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1621205 00:11:22.735 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:22.735 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.735 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:22.993 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.994 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1621205 00:11:22.994 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:22.994 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.994 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:23.252 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.252 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1621205 00:11:23.252 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:23.252 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.252 20:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:23.817 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.817 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1621205 00:11:23.817 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:23.817 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.817 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:24.075 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.075 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1621205 00:11:24.075 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:24.075 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.075 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:24.332 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.332 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1621205 00:11:24.332 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:24.332 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.332 20:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:24.590 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.590 20:41:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1621205 00:11:24.590 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:24.590 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.590 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:24.848 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.848 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1621205 00:11:24.848 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:24.848 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.848 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:25.105 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:25.362 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.362 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1621205 00:11:25.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1621205) - No such process 00:11:25.362 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1621205 00:11:25.362 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:25.362 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:25.362 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:25.362 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:25.362 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:11:25.362 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:25.362 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:11:25.362 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:25.362 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:25.362 rmmod nvme_tcp 00:11:25.362 rmmod nvme_fabrics 00:11:25.362 rmmod nvme_keyring 00:11:25.362 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:25.362 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:11:25.362 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:11:25.362 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1621174 ']' 00:11:25.362 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1621174 00:11:25.362 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1621174 ']' 00:11:25.362 20:41:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1621174 00:11:25.362 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:11:25.362 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:25.362 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1621174 00:11:25.362 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:25.362 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:25.362 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1621174' 00:11:25.362 killing process with pid 1621174 00:11:25.362 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1621174 00:11:25.362 20:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1621174 00:11:25.619 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:25.619 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:25.619 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:25.619 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:11:25.619 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:11:25.619 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:11:25.619 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:25.619 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:25.619 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:25.619 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.619 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:25.619 20:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.522 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:27.522 00:11:27.522 real 0m15.550s 00:11:27.522 user 0m38.479s 00:11:27.522 sys 0m5.973s 00:11:27.522 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.522 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:27.522 ************************************ 00:11:27.522 END TEST nvmf_connect_stress 00:11:27.522 ************************************ 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:27.781 
20:41:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:27.781 ************************************ 00:11:27.781 START TEST nvmf_fused_ordering 00:11:27.781 ************************************ 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:27.781 * Looking for test storage... 00:11:27.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:27.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.781 --rc genhtml_branch_coverage=1 00:11:27.781 --rc genhtml_function_coverage=1 00:11:27.781 --rc genhtml_legend=1 00:11:27.781 --rc geninfo_all_blocks=1 00:11:27.781 --rc geninfo_unexecuted_blocks=1 00:11:27.781 00:11:27.781 ' 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:27.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.781 --rc genhtml_branch_coverage=1 00:11:27.781 --rc genhtml_function_coverage=1 00:11:27.781 --rc genhtml_legend=1 00:11:27.781 --rc geninfo_all_blocks=1 00:11:27.781 --rc geninfo_unexecuted_blocks=1 00:11:27.781 00:11:27.781 ' 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:27.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.781 --rc genhtml_branch_coverage=1 00:11:27.781 --rc genhtml_function_coverage=1 00:11:27.781 --rc genhtml_legend=1 00:11:27.781 --rc geninfo_all_blocks=1 00:11:27.781 --rc geninfo_unexecuted_blocks=1 00:11:27.781 00:11:27.781 ' 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:27.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.781 --rc genhtml_branch_coverage=1 00:11:27.781 --rc genhtml_function_coverage=1 00:11:27.781 --rc genhtml_legend=1 00:11:27.781 --rc geninfo_all_blocks=1 00:11:27.781 --rc geninfo_unexecuted_blocks=1 00:11:27.781 00:11:27.781 ' 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.781 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:11:27.782 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:11:27.782 20:41:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:11:30.316 20:41:33 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:30.316 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:30.316 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.316 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:30.317 Found net devices under 0000:09:00.0: cvl_0_0 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:30.317 Found net devices under 0000:09:00.1: cvl_0_1 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:30.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:30.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:11:30.317 00:11:30.317 --- 10.0.0.2 ping statistics --- 00:11:30.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.317 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:30.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:30.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:11:30.317 00:11:30.317 --- 10.0.0.1 ping statistics --- 00:11:30.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.317 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1624442 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1624442 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 1624442 ']' 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:30.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:30.317 20:41:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:30.317 [2024-11-26 20:41:33.849643] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:11:30.317 [2024-11-26 20:41:33.849748] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.317 [2024-11-26 20:41:33.920724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.317 [2024-11-26 20:41:33.973841] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:30.317 [2024-11-26 20:41:33.973900] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:30.317 [2024-11-26 20:41:33.973921] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:30.317 [2024-11-26 20:41:33.973931] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:30.317 [2024-11-26 20:41:33.973940] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:30.317 [2024-11-26 20:41:33.974532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.576 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:30.576 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:11:30.576 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:30.576 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:30.576 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:30.576 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:30.576 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:30.576 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.576 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:30.576 [2024-11-26 20:41:34.116111] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:30.576 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.577 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:30.577 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.577 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:30.577 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:30.577 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:30.577 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.577 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:30.577 [2024-11-26 20:41:34.132353] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:30.577 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.577 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:30.577 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.577 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:30.577 NULL1 00:11:30.577 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.577 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:30.577 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.577 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:30.577 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.577 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:30.577 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.577 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:30.577 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.577 20:41:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:30.577 [2024-11-26 20:41:34.176256] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
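The trace just above configures the fused_ordering target entirely over JSON-RPC before the initiator binary is launched: a TCP transport is created, subsystem nqn.2016-06.io.spdk:cnode1 is added with a listener on 10.0.0.2:4420, and a ~1 GB null bdev is attached as namespace 1. A minimal standalone sketch of the same sequence follows, assuming a target that is already running and reachable on the default /var/tmp/spdk.sock RPC socket; the rpc.py path is an assumption and the flags are copied from the trace, so this is an illustration rather than the test script itself.

    # Hedged sketch; mirrors the rpc_cmd calls captured in the trace, not fused_ordering.sh verbatim.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed rpc.py location
    $RPC nvmf_create_transport -t tcp -o -u 8192                           # transport options exactly as captured (-t tcp -o -u 8192)
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512                                   # 1000 MiB null bdev, 512-byte blocks
    $RPC bdev_wait_for_examine
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    # The initiator side then connects with the same trid string seen in the log:
    #   fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'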
00:11:30.577 [2024-11-26 20:41:34.176313] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1624497 ] 00:11:31.142 Attached to nqn.2016-06.io.spdk:cnode1 00:11:31.142 Namespace ID: 1 size: 1GB 00:11:31.142 fused_ordering(0) 00:11:31.142 fused_ordering(1) 00:11:31.142 fused_ordering(2) 00:11:31.142 fused_ordering(3) 00:11:31.142 fused_ordering(4) 00:11:31.142 fused_ordering(5) 00:11:31.142 fused_ordering(6) 00:11:31.142 fused_ordering(7) 00:11:31.142 fused_ordering(8) 00:11:31.142 fused_ordering(9) 00:11:31.142 fused_ordering(10) 00:11:31.142 fused_ordering(11) 00:11:31.142 fused_ordering(12) 00:11:31.142 fused_ordering(13) 00:11:31.142 fused_ordering(14) 00:11:31.142 fused_ordering(15) 00:11:31.142 fused_ordering(16) 00:11:31.142 fused_ordering(17) 00:11:31.142 fused_ordering(18) 00:11:31.142 fused_ordering(19) 00:11:31.142 fused_ordering(20) 00:11:31.142 fused_ordering(21) 00:11:31.142 fused_ordering(22) 00:11:31.142 fused_ordering(23) 00:11:31.142 fused_ordering(24) 00:11:31.142 fused_ordering(25) 00:11:31.142 fused_ordering(26) 00:11:31.142 fused_ordering(27) 00:11:31.142 fused_ordering(28) 00:11:31.142 fused_ordering(29) 00:11:31.142 fused_ordering(30) 00:11:31.142 fused_ordering(31) 00:11:31.142 fused_ordering(32) 00:11:31.142 fused_ordering(33) 00:11:31.142 fused_ordering(34) 00:11:31.142 fused_ordering(35) 00:11:31.143 fused_ordering(36) 00:11:31.143 fused_ordering(37) 00:11:31.143 fused_ordering(38) 00:11:31.143 fused_ordering(39) 00:11:31.143 fused_ordering(40) 00:11:31.143 fused_ordering(41) 00:11:31.143 fused_ordering(42) 00:11:31.143 fused_ordering(43) 00:11:31.143 fused_ordering(44) 00:11:31.143 fused_ordering(45) 00:11:31.143 fused_ordering(46) 00:11:31.143 fused_ordering(47) 00:11:31.143 fused_ordering(48) 00:11:31.143 fused_ordering(49) 00:11:31.143 fused_ordering(50) 00:11:31.143 fused_ordering(51) 00:11:31.143 fused_ordering(52) 00:11:31.143 fused_ordering(53) 00:11:31.143 fused_ordering(54) 00:11:31.143 fused_ordering(55) 00:11:31.143 fused_ordering(56) 00:11:31.143 fused_ordering(57) 00:11:31.143 fused_ordering(58) 00:11:31.143 fused_ordering(59) 00:11:31.143 fused_ordering(60) 00:11:31.143 fused_ordering(61) 00:11:31.143 fused_ordering(62) 00:11:31.143 fused_ordering(63) 00:11:31.143 fused_ordering(64) 00:11:31.143 fused_ordering(65) 00:11:31.143 fused_ordering(66) 00:11:31.143 fused_ordering(67) 00:11:31.143 fused_ordering(68) 00:11:31.143 fused_ordering(69) 00:11:31.143 fused_ordering(70) 00:11:31.143 fused_ordering(71) 00:11:31.143 fused_ordering(72) 00:11:31.143 fused_ordering(73) 00:11:31.143 fused_ordering(74) 00:11:31.143 fused_ordering(75) 00:11:31.143 fused_ordering(76) 00:11:31.143 fused_ordering(77) 00:11:31.143 fused_ordering(78) 00:11:31.143 fused_ordering(79) 00:11:31.143 fused_ordering(80) 00:11:31.143 fused_ordering(81) 00:11:31.143 fused_ordering(82) 00:11:31.143 fused_ordering(83) 00:11:31.143 fused_ordering(84) 00:11:31.143 fused_ordering(85) 00:11:31.143 fused_ordering(86) 00:11:31.143 fused_ordering(87) 00:11:31.143 fused_ordering(88) 00:11:31.143 fused_ordering(89) 00:11:31.143 fused_ordering(90) 00:11:31.143 fused_ordering(91) 00:11:31.143 fused_ordering(92) 00:11:31.143 fused_ordering(93) 00:11:31.143 fused_ordering(94) 00:11:31.143 fused_ordering(95) 00:11:31.143 fused_ordering(96) 00:11:31.143 fused_ordering(97) 00:11:31.143 fused_ordering(98) 
00:11:31.143 fused_ordering(99) 00:11:31.143 fused_ordering(100) 00:11:31.143 fused_ordering(101) 00:11:31.143 fused_ordering(102) 00:11:31.143 fused_ordering(103) 00:11:31.143 fused_ordering(104) 00:11:31.143 fused_ordering(105) 00:11:31.143 fused_ordering(106) 00:11:31.143 fused_ordering(107) 00:11:31.143 fused_ordering(108) 00:11:31.143 fused_ordering(109) 00:11:31.143 fused_ordering(110) 00:11:31.143 fused_ordering(111) 00:11:31.143 fused_ordering(112) 00:11:31.143 fused_ordering(113) 00:11:31.143 fused_ordering(114) 00:11:31.143 fused_ordering(115) 00:11:31.143 fused_ordering(116) 00:11:31.143 fused_ordering(117) 00:11:31.143 fused_ordering(118) 00:11:31.143 fused_ordering(119) 00:11:31.143 fused_ordering(120) 00:11:31.143 fused_ordering(121) 00:11:31.143 fused_ordering(122) 00:11:31.143 fused_ordering(123) 00:11:31.143 fused_ordering(124) 00:11:31.143 fused_ordering(125) 00:11:31.143 fused_ordering(126) 00:11:31.143 fused_ordering(127) 00:11:31.143 fused_ordering(128) 00:11:31.143 fused_ordering(129) 00:11:31.143 fused_ordering(130) 00:11:31.143 fused_ordering(131) 00:11:31.143 fused_ordering(132) 00:11:31.143 fused_ordering(133) 00:11:31.143 fused_ordering(134) 00:11:31.143 fused_ordering(135) 00:11:31.143 fused_ordering(136) 00:11:31.143 fused_ordering(137) 00:11:31.143 fused_ordering(138) 00:11:31.143 fused_ordering(139) 00:11:31.143 fused_ordering(140) 00:11:31.143 fused_ordering(141) 00:11:31.143 fused_ordering(142) 00:11:31.143 fused_ordering(143) 00:11:31.143 fused_ordering(144) 00:11:31.143 fused_ordering(145) 00:11:31.143 fused_ordering(146) 00:11:31.143 fused_ordering(147) 00:11:31.143 fused_ordering(148) 00:11:31.143 fused_ordering(149) 00:11:31.143 fused_ordering(150) 00:11:31.143 fused_ordering(151) 00:11:31.143 fused_ordering(152) 00:11:31.143 fused_ordering(153) 00:11:31.143 fused_ordering(154) 00:11:31.143 fused_ordering(155) 00:11:31.143 fused_ordering(156) 00:11:31.143 fused_ordering(157) 00:11:31.143 fused_ordering(158) 00:11:31.143 fused_ordering(159) 00:11:31.143 fused_ordering(160) 00:11:31.143 fused_ordering(161) 00:11:31.143 fused_ordering(162) 00:11:31.143 fused_ordering(163) 00:11:31.143 fused_ordering(164) 00:11:31.143 fused_ordering(165) 00:11:31.143 fused_ordering(166) 00:11:31.143 fused_ordering(167) 00:11:31.143 fused_ordering(168) 00:11:31.143 fused_ordering(169) 00:11:31.143 fused_ordering(170) 00:11:31.143 fused_ordering(171) 00:11:31.143 fused_ordering(172) 00:11:31.143 fused_ordering(173) 00:11:31.143 fused_ordering(174) 00:11:31.143 fused_ordering(175) 00:11:31.143 fused_ordering(176) 00:11:31.143 fused_ordering(177) 00:11:31.143 fused_ordering(178) 00:11:31.143 fused_ordering(179) 00:11:31.143 fused_ordering(180) 00:11:31.143 fused_ordering(181) 00:11:31.143 fused_ordering(182) 00:11:31.143 fused_ordering(183) 00:11:31.143 fused_ordering(184) 00:11:31.143 fused_ordering(185) 00:11:31.143 fused_ordering(186) 00:11:31.143 fused_ordering(187) 00:11:31.143 fused_ordering(188) 00:11:31.143 fused_ordering(189) 00:11:31.143 fused_ordering(190) 00:11:31.143 fused_ordering(191) 00:11:31.143 fused_ordering(192) 00:11:31.143 fused_ordering(193) 00:11:31.143 fused_ordering(194) 00:11:31.143 fused_ordering(195) 00:11:31.143 fused_ordering(196) 00:11:31.143 fused_ordering(197) 00:11:31.143 fused_ordering(198) 00:11:31.143 fused_ordering(199) 00:11:31.143 fused_ordering(200) 00:11:31.143 fused_ordering(201) 00:11:31.143 fused_ordering(202) 00:11:31.143 fused_ordering(203) 00:11:31.143 fused_ordering(204) 00:11:31.143 fused_ordering(205) 00:11:31.709 
fused_ordering(206) 00:11:31.709 fused_ordering(207) 00:11:31.709 fused_ordering(208) 00:11:31.709 fused_ordering(209) 00:11:31.709 fused_ordering(210) 00:11:31.709 fused_ordering(211) 00:11:31.709 fused_ordering(212) 00:11:31.709 fused_ordering(213) 00:11:31.709 fused_ordering(214) 00:11:31.709 fused_ordering(215) 00:11:31.709 fused_ordering(216) 00:11:31.709 fused_ordering(217) 00:11:31.709 fused_ordering(218) 00:11:31.709 fused_ordering(219) 00:11:31.709 fused_ordering(220) 00:11:31.709 fused_ordering(221) 00:11:31.709 fused_ordering(222) 00:11:31.709 fused_ordering(223) 00:11:31.709 fused_ordering(224) 00:11:31.709 fused_ordering(225) 00:11:31.709 fused_ordering(226) 00:11:31.709 fused_ordering(227) 00:11:31.709 fused_ordering(228) 00:11:31.709 fused_ordering(229) 00:11:31.709 fused_ordering(230) 00:11:31.709 fused_ordering(231) 00:11:31.709 fused_ordering(232) 00:11:31.709 fused_ordering(233) 00:11:31.709 fused_ordering(234) 00:11:31.709 fused_ordering(235) 00:11:31.709 fused_ordering(236) 00:11:31.709 fused_ordering(237) 00:11:31.709 fused_ordering(238) 00:11:31.709 fused_ordering(239) 00:11:31.709 fused_ordering(240) 00:11:31.709 fused_ordering(241) 00:11:31.709 fused_ordering(242) 00:11:31.709 fused_ordering(243) 00:11:31.709 fused_ordering(244) 00:11:31.709 fused_ordering(245) 00:11:31.709 fused_ordering(246) 00:11:31.709 fused_ordering(247) 00:11:31.709 fused_ordering(248) 00:11:31.709 fused_ordering(249) 00:11:31.709 fused_ordering(250) 00:11:31.709 fused_ordering(251) 00:11:31.709 fused_ordering(252) 00:11:31.709 fused_ordering(253) 00:11:31.709 fused_ordering(254) 00:11:31.709 fused_ordering(255) 00:11:31.709 fused_ordering(256) 00:11:31.709 fused_ordering(257) 00:11:31.709 fused_ordering(258) 00:11:31.710 fused_ordering(259) 00:11:31.710 fused_ordering(260) 00:11:31.710 fused_ordering(261) 00:11:31.710 fused_ordering(262) 00:11:31.710 fused_ordering(263) 00:11:31.710 fused_ordering(264) 00:11:31.710 fused_ordering(265) 00:11:31.710 fused_ordering(266) 00:11:31.710 fused_ordering(267) 00:11:31.710 fused_ordering(268) 00:11:31.710 fused_ordering(269) 00:11:31.710 fused_ordering(270) 00:11:31.710 fused_ordering(271) 00:11:31.710 fused_ordering(272) 00:11:31.710 fused_ordering(273) 00:11:31.710 fused_ordering(274) 00:11:31.710 fused_ordering(275) 00:11:31.710 fused_ordering(276) 00:11:31.710 fused_ordering(277) 00:11:31.710 fused_ordering(278) 00:11:31.710 fused_ordering(279) 00:11:31.710 fused_ordering(280) 00:11:31.710 fused_ordering(281) 00:11:31.710 fused_ordering(282) 00:11:31.710 fused_ordering(283) 00:11:31.710 fused_ordering(284) 00:11:31.710 fused_ordering(285) 00:11:31.710 fused_ordering(286) 00:11:31.710 fused_ordering(287) 00:11:31.710 fused_ordering(288) 00:11:31.710 fused_ordering(289) 00:11:31.710 fused_ordering(290) 00:11:31.710 fused_ordering(291) 00:11:31.710 fused_ordering(292) 00:11:31.710 fused_ordering(293) 00:11:31.710 fused_ordering(294) 00:11:31.710 fused_ordering(295) 00:11:31.710 fused_ordering(296) 00:11:31.710 fused_ordering(297) 00:11:31.710 fused_ordering(298) 00:11:31.710 fused_ordering(299) 00:11:31.710 fused_ordering(300) 00:11:31.710 fused_ordering(301) 00:11:31.710 fused_ordering(302) 00:11:31.710 fused_ordering(303) 00:11:31.710 fused_ordering(304) 00:11:31.710 fused_ordering(305) 00:11:31.710 fused_ordering(306) 00:11:31.710 fused_ordering(307) 00:11:31.710 fused_ordering(308) 00:11:31.710 fused_ordering(309) 00:11:31.710 fused_ordering(310) 00:11:31.710 fused_ordering(311) 00:11:31.710 fused_ordering(312) 00:11:31.710 fused_ordering(313) 
00:11:31.710 fused_ordering(314) 00:11:31.710 fused_ordering(315) 00:11:31.710 fused_ordering(316) 00:11:31.710 fused_ordering(317) 00:11:31.710 fused_ordering(318) 00:11:31.710 fused_ordering(319) 00:11:31.710 fused_ordering(320) 00:11:31.710 fused_ordering(321) 00:11:31.710 fused_ordering(322) 00:11:31.710 fused_ordering(323) 00:11:31.710 fused_ordering(324) 00:11:31.710 fused_ordering(325) 00:11:31.710 fused_ordering(326) 00:11:31.710 fused_ordering(327) 00:11:31.710 fused_ordering(328) 00:11:31.710 fused_ordering(329) 00:11:31.710 fused_ordering(330) 00:11:31.710 fused_ordering(331) 00:11:31.710 fused_ordering(332) 00:11:31.710 fused_ordering(333) 00:11:31.710 fused_ordering(334) 00:11:31.710 fused_ordering(335) 00:11:31.710 fused_ordering(336) 00:11:31.710 fused_ordering(337) 00:11:31.710 fused_ordering(338) 00:11:31.710 fused_ordering(339) 00:11:31.710 fused_ordering(340) 00:11:31.710 fused_ordering(341) 00:11:31.710 fused_ordering(342) 00:11:31.710 fused_ordering(343) 00:11:31.710 fused_ordering(344) 00:11:31.710 fused_ordering(345) 00:11:31.710 fused_ordering(346) 00:11:31.710 fused_ordering(347) 00:11:31.710 fused_ordering(348) 00:11:31.710 fused_ordering(349) 00:11:31.710 fused_ordering(350) 00:11:31.710 fused_ordering(351) 00:11:31.710 fused_ordering(352) 00:11:31.710 fused_ordering(353) 00:11:31.710 fused_ordering(354) 00:11:31.710 fused_ordering(355) 00:11:31.710 fused_ordering(356) 00:11:31.710 fused_ordering(357) 00:11:31.710 fused_ordering(358) 00:11:31.710 fused_ordering(359) 00:11:31.710 fused_ordering(360) 00:11:31.710 fused_ordering(361) 00:11:31.710 fused_ordering(362) 00:11:31.710 fused_ordering(363) 00:11:31.710 fused_ordering(364) 00:11:31.710 fused_ordering(365) 00:11:31.710 fused_ordering(366) 00:11:31.710 fused_ordering(367) 00:11:31.710 fused_ordering(368) 00:11:31.710 fused_ordering(369) 00:11:31.710 fused_ordering(370) 00:11:31.710 fused_ordering(371) 00:11:31.710 fused_ordering(372) 00:11:31.710 fused_ordering(373) 00:11:31.710 fused_ordering(374) 00:11:31.710 fused_ordering(375) 00:11:31.710 fused_ordering(376) 00:11:31.710 fused_ordering(377) 00:11:31.710 fused_ordering(378) 00:11:31.710 fused_ordering(379) 00:11:31.710 fused_ordering(380) 00:11:31.710 fused_ordering(381) 00:11:31.710 fused_ordering(382) 00:11:31.710 fused_ordering(383) 00:11:31.710 fused_ordering(384) 00:11:31.710 fused_ordering(385) 00:11:31.710 fused_ordering(386) 00:11:31.710 fused_ordering(387) 00:11:31.710 fused_ordering(388) 00:11:31.710 fused_ordering(389) 00:11:31.710 fused_ordering(390) 00:11:31.710 fused_ordering(391) 00:11:31.710 fused_ordering(392) 00:11:31.710 fused_ordering(393) 00:11:31.710 fused_ordering(394) 00:11:31.710 fused_ordering(395) 00:11:31.710 fused_ordering(396) 00:11:31.710 fused_ordering(397) 00:11:31.710 fused_ordering(398) 00:11:31.710 fused_ordering(399) 00:11:31.710 fused_ordering(400) 00:11:31.710 fused_ordering(401) 00:11:31.710 fused_ordering(402) 00:11:31.710 fused_ordering(403) 00:11:31.710 fused_ordering(404) 00:11:31.710 fused_ordering(405) 00:11:31.710 fused_ordering(406) 00:11:31.710 fused_ordering(407) 00:11:31.710 fused_ordering(408) 00:11:31.710 fused_ordering(409) 00:11:31.710 fused_ordering(410) 00:11:31.968 fused_ordering(411) 00:11:31.968 fused_ordering(412) 00:11:31.968 fused_ordering(413) 00:11:31.968 fused_ordering(414) 00:11:31.968 fused_ordering(415) 00:11:31.968 fused_ordering(416) 00:11:31.968 fused_ordering(417) 00:11:31.968 fused_ordering(418) 00:11:31.968 fused_ordering(419) 00:11:31.968 fused_ordering(420) 00:11:31.968 
fused_ordering(421) 00:11:31.968 fused_ordering(422) 00:11:31.968 fused_ordering(423) 00:11:31.968 fused_ordering(424) 00:11:31.968 fused_ordering(425) 00:11:31.968 fused_ordering(426) 00:11:31.968 fused_ordering(427) 00:11:31.968 fused_ordering(428) 00:11:31.968 fused_ordering(429) 00:11:31.968 fused_ordering(430) 00:11:31.968 fused_ordering(431) 00:11:31.968 fused_ordering(432) 00:11:31.968 fused_ordering(433) 00:11:31.968 fused_ordering(434) 00:11:31.968 fused_ordering(435) 00:11:31.968 fused_ordering(436) 00:11:31.968 fused_ordering(437) 00:11:31.968 fused_ordering(438) 00:11:31.968 fused_ordering(439) 00:11:31.968 fused_ordering(440) 00:11:31.968 fused_ordering(441) 00:11:31.968 fused_ordering(442) 00:11:31.968 fused_ordering(443) 00:11:31.968 fused_ordering(444) 00:11:31.968 fused_ordering(445) 00:11:31.969 fused_ordering(446) 00:11:31.969 fused_ordering(447) 00:11:31.969 fused_ordering(448) 00:11:31.969 fused_ordering(449) 00:11:31.969 fused_ordering(450) 00:11:31.969 fused_ordering(451) 00:11:31.969 fused_ordering(452) 00:11:31.969 fused_ordering(453) 00:11:31.969 fused_ordering(454) 00:11:31.969 fused_ordering(455) 00:11:31.969 fused_ordering(456) 00:11:31.969 fused_ordering(457) 00:11:31.969 fused_ordering(458) 00:11:31.969 fused_ordering(459) 00:11:31.969 fused_ordering(460) 00:11:31.969 fused_ordering(461) 00:11:31.969 fused_ordering(462) 00:11:31.969 fused_ordering(463) 00:11:31.969 fused_ordering(464) 00:11:31.969 fused_ordering(465) 00:11:31.969 fused_ordering(466) 00:11:31.969 fused_ordering(467) 00:11:31.969 fused_ordering(468) 00:11:31.969 fused_ordering(469) 00:11:31.969 fused_ordering(470) 00:11:31.969 fused_ordering(471) 00:11:31.969 fused_ordering(472) 00:11:31.969 fused_ordering(473) 00:11:31.969 fused_ordering(474) 00:11:31.969 fused_ordering(475) 00:11:31.969 fused_ordering(476) 00:11:31.969 fused_ordering(477) 00:11:31.969 fused_ordering(478) 00:11:31.969 fused_ordering(479) 00:11:31.969 fused_ordering(480) 00:11:31.969 fused_ordering(481) 00:11:31.969 fused_ordering(482) 00:11:31.969 fused_ordering(483) 00:11:31.969 fused_ordering(484) 00:11:31.969 fused_ordering(485) 00:11:31.969 fused_ordering(486) 00:11:31.969 fused_ordering(487) 00:11:31.969 fused_ordering(488) 00:11:31.969 fused_ordering(489) 00:11:31.969 fused_ordering(490) 00:11:31.969 fused_ordering(491) 00:11:31.969 fused_ordering(492) 00:11:31.969 fused_ordering(493) 00:11:31.969 fused_ordering(494) 00:11:31.969 fused_ordering(495) 00:11:31.969 fused_ordering(496) 00:11:31.969 fused_ordering(497) 00:11:31.969 fused_ordering(498) 00:11:31.969 fused_ordering(499) 00:11:31.969 fused_ordering(500) 00:11:31.969 fused_ordering(501) 00:11:31.969 fused_ordering(502) 00:11:31.969 fused_ordering(503) 00:11:31.969 fused_ordering(504) 00:11:31.969 fused_ordering(505) 00:11:31.969 fused_ordering(506) 00:11:31.969 fused_ordering(507) 00:11:31.969 fused_ordering(508) 00:11:31.969 fused_ordering(509) 00:11:31.969 fused_ordering(510) 00:11:31.969 fused_ordering(511) 00:11:31.969 fused_ordering(512) 00:11:31.969 fused_ordering(513) 00:11:31.969 fused_ordering(514) 00:11:31.969 fused_ordering(515) 00:11:31.969 fused_ordering(516) 00:11:31.969 fused_ordering(517) 00:11:31.969 fused_ordering(518) 00:11:31.969 fused_ordering(519) 00:11:31.969 fused_ordering(520) 00:11:31.969 fused_ordering(521) 00:11:31.969 fused_ordering(522) 00:11:31.969 fused_ordering(523) 00:11:31.969 fused_ordering(524) 00:11:31.969 fused_ordering(525) 00:11:31.969 fused_ordering(526) 00:11:31.969 fused_ordering(527) 00:11:31.969 fused_ordering(528) 
00:11:31.969 fused_ordering(529) 00:11:31.969 fused_ordering(530) 00:11:31.969 fused_ordering(531) 00:11:31.969 fused_ordering(532) 00:11:31.969 fused_ordering(533) 00:11:31.969 fused_ordering(534) 00:11:31.969 fused_ordering(535) 00:11:31.969 fused_ordering(536) 00:11:31.969 fused_ordering(537) 00:11:31.969 fused_ordering(538) 00:11:31.969 fused_ordering(539) 00:11:31.969 fused_ordering(540) 00:11:31.969 fused_ordering(541) 00:11:31.969 fused_ordering(542) 00:11:31.969 fused_ordering(543) 00:11:31.969 fused_ordering(544) 00:11:31.969 fused_ordering(545) 00:11:31.969 fused_ordering(546) 00:11:31.969 fused_ordering(547) 00:11:31.969 fused_ordering(548) 00:11:31.969 fused_ordering(549) 00:11:31.969 fused_ordering(550) 00:11:31.969 fused_ordering(551) 00:11:31.969 fused_ordering(552) 00:11:31.969 fused_ordering(553) 00:11:31.969 fused_ordering(554) 00:11:31.969 fused_ordering(555) 00:11:31.969 fused_ordering(556) 00:11:31.969 fused_ordering(557) 00:11:31.969 fused_ordering(558) 00:11:31.969 fused_ordering(559) 00:11:31.969 fused_ordering(560) 00:11:31.969 fused_ordering(561) 00:11:31.969 fused_ordering(562) 00:11:31.969 fused_ordering(563) 00:11:31.969 fused_ordering(564) 00:11:31.969 fused_ordering(565) 00:11:31.969 fused_ordering(566) 00:11:31.969 fused_ordering(567) 00:11:31.969 fused_ordering(568) 00:11:31.969 fused_ordering(569) 00:11:31.969 fused_ordering(570) 00:11:31.969 fused_ordering(571) 00:11:31.969 fused_ordering(572) 00:11:31.969 fused_ordering(573) 00:11:31.969 fused_ordering(574) 00:11:31.969 fused_ordering(575) 00:11:31.969 fused_ordering(576) 00:11:31.969 fused_ordering(577) 00:11:31.969 fused_ordering(578) 00:11:31.969 fused_ordering(579) 00:11:31.969 fused_ordering(580) 00:11:31.969 fused_ordering(581) 00:11:31.969 fused_ordering(582) 00:11:31.969 fused_ordering(583) 00:11:31.969 fused_ordering(584) 00:11:31.969 fused_ordering(585) 00:11:31.969 fused_ordering(586) 00:11:31.969 fused_ordering(587) 00:11:31.969 fused_ordering(588) 00:11:31.969 fused_ordering(589) 00:11:31.969 fused_ordering(590) 00:11:31.969 fused_ordering(591) 00:11:31.969 fused_ordering(592) 00:11:31.969 fused_ordering(593) 00:11:31.969 fused_ordering(594) 00:11:31.969 fused_ordering(595) 00:11:31.969 fused_ordering(596) 00:11:31.969 fused_ordering(597) 00:11:31.969 fused_ordering(598) 00:11:31.969 fused_ordering(599) 00:11:31.969 fused_ordering(600) 00:11:31.969 fused_ordering(601) 00:11:31.969 fused_ordering(602) 00:11:31.969 fused_ordering(603) 00:11:31.969 fused_ordering(604) 00:11:31.969 fused_ordering(605) 00:11:31.969 fused_ordering(606) 00:11:31.969 fused_ordering(607) 00:11:31.969 fused_ordering(608) 00:11:31.969 fused_ordering(609) 00:11:31.969 fused_ordering(610) 00:11:31.969 fused_ordering(611) 00:11:31.969 fused_ordering(612) 00:11:31.969 fused_ordering(613) 00:11:31.969 fused_ordering(614) 00:11:31.969 fused_ordering(615) 00:11:32.535 fused_ordering(616) 00:11:32.535 fused_ordering(617) 00:11:32.535 fused_ordering(618) 00:11:32.535 fused_ordering(619) 00:11:32.535 fused_ordering(620) 00:11:32.535 fused_ordering(621) 00:11:32.535 fused_ordering(622) 00:11:32.535 fused_ordering(623) 00:11:32.535 fused_ordering(624) 00:11:32.535 fused_ordering(625) 00:11:32.535 fused_ordering(626) 00:11:32.535 fused_ordering(627) 00:11:32.535 fused_ordering(628) 00:11:32.535 fused_ordering(629) 00:11:32.535 fused_ordering(630) 00:11:32.535 fused_ordering(631) 00:11:32.535 fused_ordering(632) 00:11:32.535 fused_ordering(633) 00:11:32.535 fused_ordering(634) 00:11:32.535 fused_ordering(635) 00:11:32.535 
fused_ordering(636) 00:11:32.535 fused_ordering(637) 00:11:32.535 fused_ordering(638) 00:11:32.535 fused_ordering(639) 00:11:32.535 fused_ordering(640) 00:11:32.535 fused_ordering(641) 00:11:32.535 fused_ordering(642) 00:11:32.535 fused_ordering(643) 00:11:32.535 fused_ordering(644) 00:11:32.535 fused_ordering(645) 00:11:32.535 fused_ordering(646) 00:11:32.535 fused_ordering(647) 00:11:32.535 fused_ordering(648) 00:11:32.535 fused_ordering(649) 00:11:32.535 fused_ordering(650) 00:11:32.535 fused_ordering(651) 00:11:32.535 fused_ordering(652) 00:11:32.535 fused_ordering(653) 00:11:32.535 fused_ordering(654) 00:11:32.535 fused_ordering(655) 00:11:32.535 fused_ordering(656) 00:11:32.535 fused_ordering(657) 00:11:32.535 fused_ordering(658) 00:11:32.535 fused_ordering(659) 00:11:32.535 fused_ordering(660) 00:11:32.535 fused_ordering(661) 00:11:32.535 fused_ordering(662) 00:11:32.535 fused_ordering(663) 00:11:32.535 fused_ordering(664) 00:11:32.535 fused_ordering(665) 00:11:32.535 fused_ordering(666) 00:11:32.535 fused_ordering(667) 00:11:32.535 fused_ordering(668) 00:11:32.535 fused_ordering(669) 00:11:32.535 fused_ordering(670) 00:11:32.535 fused_ordering(671) 00:11:32.535 fused_ordering(672) 00:11:32.535 fused_ordering(673) 00:11:32.535 fused_ordering(674) 00:11:32.535 fused_ordering(675) 00:11:32.535 fused_ordering(676) 00:11:32.535 fused_ordering(677) 00:11:32.535 fused_ordering(678) 00:11:32.535 fused_ordering(679) 00:11:32.535 fused_ordering(680) 00:11:32.535 fused_ordering(681) 00:11:32.535 fused_ordering(682) 00:11:32.535 fused_ordering(683) 00:11:32.535 fused_ordering(684) 00:11:32.535 fused_ordering(685) 00:11:32.535 fused_ordering(686) 00:11:32.535 fused_ordering(687) 00:11:32.535 fused_ordering(688) 00:11:32.535 fused_ordering(689) 00:11:32.535 fused_ordering(690) 00:11:32.535 fused_ordering(691) 00:11:32.535 fused_ordering(692) 00:11:32.535 fused_ordering(693) 00:11:32.535 fused_ordering(694) 00:11:32.535 fused_ordering(695) 00:11:32.535 fused_ordering(696) 00:11:32.535 fused_ordering(697) 00:11:32.535 fused_ordering(698) 00:11:32.535 fused_ordering(699) 00:11:32.535 fused_ordering(700) 00:11:32.535 fused_ordering(701) 00:11:32.535 fused_ordering(702) 00:11:32.535 fused_ordering(703) 00:11:32.535 fused_ordering(704) 00:11:32.535 fused_ordering(705) 00:11:32.535 fused_ordering(706) 00:11:32.535 fused_ordering(707) 00:11:32.535 fused_ordering(708) 00:11:32.535 fused_ordering(709) 00:11:32.535 fused_ordering(710) 00:11:32.535 fused_ordering(711) 00:11:32.535 fused_ordering(712) 00:11:32.535 fused_ordering(713) 00:11:32.535 fused_ordering(714) 00:11:32.535 fused_ordering(715) 00:11:32.535 fused_ordering(716) 00:11:32.535 fused_ordering(717) 00:11:32.535 fused_ordering(718) 00:11:32.535 fused_ordering(719) 00:11:32.535 fused_ordering(720) 00:11:32.535 fused_ordering(721) 00:11:32.535 fused_ordering(722) 00:11:32.535 fused_ordering(723) 00:11:32.535 fused_ordering(724) 00:11:32.535 fused_ordering(725) 00:11:32.535 fused_ordering(726) 00:11:32.535 fused_ordering(727) 00:11:32.535 fused_ordering(728) 00:11:32.535 fused_ordering(729) 00:11:32.535 fused_ordering(730) 00:11:32.535 fused_ordering(731) 00:11:32.535 fused_ordering(732) 00:11:32.535 fused_ordering(733) 00:11:32.535 fused_ordering(734) 00:11:32.535 fused_ordering(735) 00:11:32.535 fused_ordering(736) 00:11:32.535 fused_ordering(737) 00:11:32.535 fused_ordering(738) 00:11:32.535 fused_ordering(739) 00:11:32.535 fused_ordering(740) 00:11:32.535 fused_ordering(741) 00:11:32.535 fused_ordering(742) 00:11:32.535 fused_ordering(743) 
00:11:32.535 fused_ordering(744) 00:11:32.535 fused_ordering(745) 00:11:32.535 fused_ordering(746) 00:11:32.535 fused_ordering(747) 00:11:32.535 fused_ordering(748) 00:11:32.535 fused_ordering(749) 00:11:32.535 fused_ordering(750) 00:11:32.535 fused_ordering(751) 00:11:32.535 fused_ordering(752) 00:11:32.535 fused_ordering(753) 00:11:32.535 fused_ordering(754) 00:11:32.535 fused_ordering(755) 00:11:32.535 fused_ordering(756) 00:11:32.535 fused_ordering(757) 00:11:32.535 fused_ordering(758) 00:11:32.535 fused_ordering(759) 00:11:32.535 fused_ordering(760) 00:11:32.535 fused_ordering(761) 00:11:32.535 fused_ordering(762) 00:11:32.535 fused_ordering(763) 00:11:32.535 fused_ordering(764) 00:11:32.535 fused_ordering(765) 00:11:32.535 fused_ordering(766) 00:11:32.535 fused_ordering(767) 00:11:32.535 fused_ordering(768) 00:11:32.535 fused_ordering(769) 00:11:32.535 fused_ordering(770) 00:11:32.535 fused_ordering(771) 00:11:32.535 fused_ordering(772) 00:11:32.535 fused_ordering(773) 00:11:32.535 fused_ordering(774) 00:11:32.535 fused_ordering(775) 00:11:32.535 fused_ordering(776) 00:11:32.535 fused_ordering(777) 00:11:32.535 fused_ordering(778) 00:11:32.535 fused_ordering(779) 00:11:32.535 fused_ordering(780) 00:11:32.535 fused_ordering(781) 00:11:32.535 fused_ordering(782) 00:11:32.535 fused_ordering(783) 00:11:32.535 fused_ordering(784) 00:11:32.535 fused_ordering(785) 00:11:32.535 fused_ordering(786) 00:11:32.535 fused_ordering(787) 00:11:32.535 fused_ordering(788) 00:11:32.535 fused_ordering(789) 00:11:32.535 fused_ordering(790) 00:11:32.535 fused_ordering(791) 00:11:32.535 fused_ordering(792) 00:11:32.535 fused_ordering(793) 00:11:32.535 fused_ordering(794) 00:11:32.535 fused_ordering(795) 00:11:32.535 fused_ordering(796) 00:11:32.535 fused_ordering(797) 00:11:32.535 fused_ordering(798) 00:11:32.535 fused_ordering(799) 00:11:32.535 fused_ordering(800) 00:11:32.535 fused_ordering(801) 00:11:32.535 fused_ordering(802) 00:11:32.535 fused_ordering(803) 00:11:32.535 fused_ordering(804) 00:11:32.535 fused_ordering(805) 00:11:32.535 fused_ordering(806) 00:11:32.535 fused_ordering(807) 00:11:32.535 fused_ordering(808) 00:11:32.535 fused_ordering(809) 00:11:32.535 fused_ordering(810) 00:11:32.535 fused_ordering(811) 00:11:32.535 fused_ordering(812) 00:11:32.535 fused_ordering(813) 00:11:32.535 fused_ordering(814) 00:11:32.535 fused_ordering(815) 00:11:32.535 fused_ordering(816) 00:11:32.535 fused_ordering(817) 00:11:32.535 fused_ordering(818) 00:11:32.535 fused_ordering(819) 00:11:32.535 fused_ordering(820) 00:11:33.102 fused_ordering(821) 00:11:33.102 fused_ordering(822) 00:11:33.102 fused_ordering(823) 00:11:33.102 fused_ordering(824) 00:11:33.102 fused_ordering(825) 00:11:33.102 fused_ordering(826) 00:11:33.102 fused_ordering(827) 00:11:33.102 fused_ordering(828) 00:11:33.102 fused_ordering(829) 00:11:33.102 fused_ordering(830) 00:11:33.102 fused_ordering(831) 00:11:33.102 fused_ordering(832) 00:11:33.102 fused_ordering(833) 00:11:33.102 fused_ordering(834) 00:11:33.102 fused_ordering(835) 00:11:33.102 fused_ordering(836) 00:11:33.102 fused_ordering(837) 00:11:33.102 fused_ordering(838) 00:11:33.102 fused_ordering(839) 00:11:33.102 fused_ordering(840) 00:11:33.102 fused_ordering(841) 00:11:33.102 fused_ordering(842) 00:11:33.102 fused_ordering(843) 00:11:33.102 fused_ordering(844) 00:11:33.102 fused_ordering(845) 00:11:33.102 fused_ordering(846) 00:11:33.102 fused_ordering(847) 00:11:33.102 fused_ordering(848) 00:11:33.102 fused_ordering(849) 00:11:33.102 fused_ordering(850) 00:11:33.102 
fused_ordering(851) 00:11:33.102 fused_ordering(852) 00:11:33.102 fused_ordering(853) 00:11:33.102 fused_ordering(854) 00:11:33.102 fused_ordering(855) 00:11:33.102 fused_ordering(856) 00:11:33.102 fused_ordering(857) 00:11:33.102 fused_ordering(858) 00:11:33.102 fused_ordering(859) 00:11:33.102 fused_ordering(860) 00:11:33.102 fused_ordering(861) 00:11:33.102 fused_ordering(862) 00:11:33.102 fused_ordering(863) 00:11:33.102 fused_ordering(864) 00:11:33.102 fused_ordering(865) 00:11:33.102 fused_ordering(866) 00:11:33.102 fused_ordering(867) 00:11:33.102 fused_ordering(868) 00:11:33.102 fused_ordering(869) 00:11:33.102 fused_ordering(870) 00:11:33.102 fused_ordering(871) 00:11:33.102 fused_ordering(872) 00:11:33.102 fused_ordering(873) 00:11:33.102 fused_ordering(874) 00:11:33.102 fused_ordering(875) 00:11:33.102 fused_ordering(876) 00:11:33.102 fused_ordering(877) 00:11:33.102 fused_ordering(878) 00:11:33.102 fused_ordering(879) 00:11:33.102 fused_ordering(880) 00:11:33.102 fused_ordering(881) 00:11:33.102 fused_ordering(882) 00:11:33.102 fused_ordering(883) 00:11:33.102 fused_ordering(884) 00:11:33.102 fused_ordering(885) 00:11:33.102 fused_ordering(886) 00:11:33.102 fused_ordering(887) 00:11:33.102 fused_ordering(888) 00:11:33.102 fused_ordering(889) 00:11:33.102 fused_ordering(890) 00:11:33.102 fused_ordering(891) 00:11:33.102 fused_ordering(892) 00:11:33.102 fused_ordering(893) 00:11:33.102 fused_ordering(894) 00:11:33.102 fused_ordering(895) 00:11:33.102 fused_ordering(896) 00:11:33.102 fused_ordering(897) 00:11:33.102 fused_ordering(898) 00:11:33.102 fused_ordering(899) 00:11:33.102 fused_ordering(900) 00:11:33.102 fused_ordering(901) 00:11:33.102 fused_ordering(902) 00:11:33.102 fused_ordering(903) 00:11:33.102 fused_ordering(904) 00:11:33.102 fused_ordering(905) 00:11:33.102 fused_ordering(906) 00:11:33.102 fused_ordering(907) 00:11:33.102 fused_ordering(908) 00:11:33.102 fused_ordering(909) 00:11:33.102 fused_ordering(910) 00:11:33.102 fused_ordering(911) 00:11:33.102 fused_ordering(912) 00:11:33.102 fused_ordering(913) 00:11:33.102 fused_ordering(914) 00:11:33.102 fused_ordering(915) 00:11:33.102 fused_ordering(916) 00:11:33.102 fused_ordering(917) 00:11:33.102 fused_ordering(918) 00:11:33.102 fused_ordering(919) 00:11:33.102 fused_ordering(920) 00:11:33.102 fused_ordering(921) 00:11:33.102 fused_ordering(922) 00:11:33.102 fused_ordering(923) 00:11:33.102 fused_ordering(924) 00:11:33.102 fused_ordering(925) 00:11:33.102 fused_ordering(926) 00:11:33.102 fused_ordering(927) 00:11:33.102 fused_ordering(928) 00:11:33.102 fused_ordering(929) 00:11:33.102 fused_ordering(930) 00:11:33.102 fused_ordering(931) 00:11:33.102 fused_ordering(932) 00:11:33.102 fused_ordering(933) 00:11:33.102 fused_ordering(934) 00:11:33.102 fused_ordering(935) 00:11:33.102 fused_ordering(936) 00:11:33.102 fused_ordering(937) 00:11:33.102 fused_ordering(938) 00:11:33.102 fused_ordering(939) 00:11:33.102 fused_ordering(940) 00:11:33.102 fused_ordering(941) 00:11:33.102 fused_ordering(942) 00:11:33.102 fused_ordering(943) 00:11:33.102 fused_ordering(944) 00:11:33.102 fused_ordering(945) 00:11:33.102 fused_ordering(946) 00:11:33.102 fused_ordering(947) 00:11:33.102 fused_ordering(948) 00:11:33.102 fused_ordering(949) 00:11:33.102 fused_ordering(950) 00:11:33.102 fused_ordering(951) 00:11:33.102 fused_ordering(952) 00:11:33.102 fused_ordering(953) 00:11:33.102 fused_ordering(954) 00:11:33.102 fused_ordering(955) 00:11:33.102 fused_ordering(956) 00:11:33.102 fused_ordering(957) 00:11:33.102 fused_ordering(958) 
00:11:33.102 fused_ordering(959) 00:11:33.102 fused_ordering(960) 00:11:33.102 fused_ordering(961) 00:11:33.102 fused_ordering(962) 00:11:33.102 fused_ordering(963) 00:11:33.102 fused_ordering(964) 00:11:33.102 fused_ordering(965) 00:11:33.102 fused_ordering(966) 00:11:33.102 fused_ordering(967) 00:11:33.102 fused_ordering(968) 00:11:33.102 fused_ordering(969) 00:11:33.102 fused_ordering(970) 00:11:33.102 fused_ordering(971) 00:11:33.102 fused_ordering(972) 00:11:33.102 fused_ordering(973) 00:11:33.102 fused_ordering(974) 00:11:33.102 fused_ordering(975) 00:11:33.102 fused_ordering(976) 00:11:33.102 fused_ordering(977) 00:11:33.102 fused_ordering(978) 00:11:33.103 fused_ordering(979) 00:11:33.103 fused_ordering(980) 00:11:33.103 fused_ordering(981) 00:11:33.103 fused_ordering(982) 00:11:33.103 fused_ordering(983) 00:11:33.103 fused_ordering(984) 00:11:33.103 fused_ordering(985) 00:11:33.103 fused_ordering(986) 00:11:33.103 fused_ordering(987) 00:11:33.103 fused_ordering(988) 00:11:33.103 fused_ordering(989) 00:11:33.103 fused_ordering(990) 00:11:33.103 fused_ordering(991) 00:11:33.103 fused_ordering(992) 00:11:33.103 fused_ordering(993) 00:11:33.103 fused_ordering(994) 00:11:33.103 fused_ordering(995) 00:11:33.103 fused_ordering(996) 00:11:33.103 fused_ordering(997) 00:11:33.103 fused_ordering(998) 00:11:33.103 fused_ordering(999) 00:11:33.103 fused_ordering(1000) 00:11:33.103 fused_ordering(1001) 00:11:33.103 fused_ordering(1002) 00:11:33.103 fused_ordering(1003) 00:11:33.103 fused_ordering(1004) 00:11:33.103 fused_ordering(1005) 00:11:33.103 fused_ordering(1006) 00:11:33.103 fused_ordering(1007) 00:11:33.103 fused_ordering(1008) 00:11:33.103 fused_ordering(1009) 00:11:33.103 fused_ordering(1010) 00:11:33.103 fused_ordering(1011) 00:11:33.103 fused_ordering(1012) 00:11:33.103 fused_ordering(1013) 00:11:33.103 fused_ordering(1014) 00:11:33.103 fused_ordering(1015) 00:11:33.103 fused_ordering(1016) 00:11:33.103 fused_ordering(1017) 00:11:33.103 fused_ordering(1018) 00:11:33.103 fused_ordering(1019) 00:11:33.103 fused_ordering(1020) 00:11:33.103 fused_ordering(1021) 00:11:33.103 fused_ordering(1022) 00:11:33.103 fused_ordering(1023) 00:11:33.103 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:33.103 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:33.103 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:33.103 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:11:33.103 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:33.103 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:11:33.103 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:33.103 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:33.103 rmmod nvme_tcp 00:11:33.103 rmmod nvme_fabrics 00:11:33.103 rmmod nvme_keyring 00:11:33.103 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:33.103 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:11:33.103 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:11:33.103 20:41:36 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1624442 ']' 00:11:33.103 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1624442 00:11:33.103 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1624442 ']' 00:11:33.103 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1624442 00:11:33.103 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:11:33.103 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:33.103 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1624442 00:11:33.361 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:33.361 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:33.361 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1624442' 00:11:33.361 killing process with pid 1624442 00:11:33.361 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1624442 00:11:33.361 20:41:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1624442 00:11:33.361 20:41:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:33.361 20:41:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:33.361 20:41:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:33.361 20:41:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:11:33.361 20:41:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:11:33.361 20:41:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:33.361 20:41:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:11:33.361 20:41:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:33.361 20:41:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:33.361 20:41:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.361 20:41:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.361 20:41:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:35.898 00:11:35.898 real 0m7.838s 00:11:35.898 user 0m5.166s 00:11:35.898 sys 0m3.462s 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:35.898 ************************************ 00:11:35.898 END TEST nvmf_fused_ordering 00:11:35.898 
************************************ 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:35.898 ************************************ 00:11:35.898 START TEST nvmf_ns_masking 00:11:35.898 ************************************ 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:35.898 * Looking for test storage... 00:11:35.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:35.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.898 --rc genhtml_branch_coverage=1 00:11:35.898 --rc genhtml_function_coverage=1 00:11:35.898 --rc genhtml_legend=1 00:11:35.898 --rc geninfo_all_blocks=1 00:11:35.898 --rc geninfo_unexecuted_blocks=1 00:11:35.898 00:11:35.898 ' 00:11:35.898 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:35.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.898 --rc genhtml_branch_coverage=1 00:11:35.898 --rc genhtml_function_coverage=1 00:11:35.899 --rc genhtml_legend=1 00:11:35.899 --rc geninfo_all_blocks=1 00:11:35.899 --rc geninfo_unexecuted_blocks=1 00:11:35.899 00:11:35.899 ' 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:35.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.899 --rc genhtml_branch_coverage=1 00:11:35.899 --rc genhtml_function_coverage=1 00:11:35.899 --rc genhtml_legend=1 00:11:35.899 --rc geninfo_all_blocks=1 00:11:35.899 --rc geninfo_unexecuted_blocks=1 00:11:35.899 00:11:35.899 ' 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:35.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.899 --rc genhtml_branch_coverage=1 00:11:35.899 --rc genhtml_function_coverage=1 00:11:35.899 --rc genhtml_legend=1 00:11:35.899 --rc geninfo_all_blocks=1 00:11:35.899 --rc geninfo_unexecuted_blocks=1 00:11:35.899 00:11:35.899 ' 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:35.899 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=4cc78fa3-b8f1-4e7d-aee0-d2a58df4bbf1 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=bd4aab3d-25a1-4a07-ac15-7723697780b7 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=b6d139fc-3249-46fe-bf8f-a1644e96786d 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:11:35.899 20:41:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:38.433 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:38.433 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:11:38.433 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:38.433 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:38.433 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:38.433 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:38.433 20:41:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:38.433 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:11:38.433 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:38.433 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:11:38.433 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:11:38.433 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:11:38.433 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:11:38.433 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:11:38.433 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:11:38.433 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:38.433 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:38.433 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:38.433 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:38.433 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:38.433 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:38.433 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:38.433 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:38.433 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:38.433 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:38.433 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:38.433 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:38.433 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:38.433 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:38.433 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:38.433 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:38.433 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:38.433 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:38.433 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:38.433 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:38.433 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:38.434 20:41:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:38.434 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:38.434 Found net devices under 0000:09:00.0: cvl_0_0 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:38.434 Found net devices under 0000:09:00.1: cvl_0_1 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:38.434 20:41:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:38.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:38.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:11:38.434 00:11:38.434 --- 10.0.0.2 ping statistics --- 00:11:38.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.434 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:38.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:38.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:11:38.434 00:11:38.434 --- 10.0.0.1 ping statistics --- 00:11:38.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.434 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1626749 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1626749 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1626749 ']' 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:38.434 [2024-11-26 20:41:41.741108] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:11:38.434 [2024-11-26 20:41:41.741202] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:38.434 [2024-11-26 20:41:41.815004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.434 [2024-11-26 20:41:41.873294] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:38.434 [2024-11-26 20:41:41.873375] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:38.434 [2024-11-26 20:41:41.873388] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:38.434 [2024-11-26 20:41:41.873399] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:38.434 [2024-11-26 20:41:41.873408] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:38.434 [2024-11-26 20:41:41.874049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:38.434 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:38.435 20:41:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:38.435 20:41:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:38.435 20:41:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:38.692 [2024-11-26 20:41:42.322791] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:38.692 20:41:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:38.692 20:41:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:38.692 20:41:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:39.258 Malloc1 00:11:39.258 20:41:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:39.515 Malloc2 00:11:39.515 20:41:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:39.772 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:40.029 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:40.286 [2024-11-26 20:41:43.768944] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:40.286 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:40.286 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b6d139fc-3249-46fe-bf8f-a1644e96786d -a 10.0.0.2 -s 4420 -i 4 00:11:40.286 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:40.286 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:11:40.286 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:40.286 20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:40.286 
20:41:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:11:42.237 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:42.237 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:42.237 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:42.237 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:42.237 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:42.237 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:11:42.237 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:42.237 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:42.495 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:42.495 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:42.495 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:42.495 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:42.495 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:42.495 [ 0]:0x1 00:11:42.495 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:42.495 20:41:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:42.495 20:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4a43a4e70b5849bd97e4f96abd402d3c 00:11:42.495 20:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4a43a4e70b5849bd97e4f96abd402d3c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:42.495 20:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:42.753 20:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:42.753 20:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:42.753 20:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:42.753 [ 0]:0x1 00:11:42.753 20:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:42.753 20:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:42.753 20:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4a43a4e70b5849bd97e4f96abd402d3c 00:11:42.753 20:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4a43a4e70b5849bd97e4f96abd402d3c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:42.753 20:41:46 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:42.753 20:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:42.753 20:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:42.753 [ 1]:0x2 00:11:42.753 20:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:42.753 20:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:42.753 20:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7b272df1ebc64b2db3c75a457ffc83eb 00:11:42.753 20:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7b272df1ebc64b2db3c75a457ffc83eb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:42.753 20:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:42.753 20:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:43.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.011 20:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:43.269 20:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:43.528 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:43.528 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b6d139fc-3249-46fe-bf8f-a1644e96786d -a 10.0.0.2 -s 4420 -i 4 00:11:43.528 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:43.528 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:11:43.528 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:43.528 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:11:43.528 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:11:43.528 20:41:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:11:46.056 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:46.056 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:46.056 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:46.056 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:46.056 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:46.056 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:11:46.056 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:46.056 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:46.056 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:46.056 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:46.056 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:11:46.056 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:46.056 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:46.056 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:46.056 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:46.056 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:46.056 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:46.056 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:46.056 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:46.056 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:46.057 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:46.057 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:46.057 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:46.057 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:46.057 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:46.057 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:46.057 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:46.057 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:46.057 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:11:46.057 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:46.057 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:46.057 [ 0]:0x2 00:11:46.057 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:46.057 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:46.057 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=7b272df1ebc64b2db3c75a457ffc83eb 00:11:46.057 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7b272df1ebc64b2db3c75a457ffc83eb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:46.057 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:46.057 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:46.057 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:46.057 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:46.057 [ 0]:0x1 00:11:46.057 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:46.057 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:46.057 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4a43a4e70b5849bd97e4f96abd402d3c 00:11:46.057 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4a43a4e70b5849bd97e4f96abd402d3c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:46.057 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:46.057 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:46.057 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:46.057 [ 1]:0x2 00:11:46.057 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:46.057 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:46.057 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7b272df1ebc64b2db3c75a457ffc83eb 00:11:46.057 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7b272df1ebc64b2db3c75a457ffc83eb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:46.057 20:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:46.315 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:46.315 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:46.315 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:46.315 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:46.573 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:46.573 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:46.573 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:46.573 20:41:50 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:46.573 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:46.573 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:46.573 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:46.573 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:46.573 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:46.573 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:46.573 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:46.573 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:46.573 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:46.573 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:46.573 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:46.573 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:46.573 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:46.573 [ 0]:0x2 00:11:46.573 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:46.573 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:46.573 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7b272df1ebc64b2db3c75a457ffc83eb 00:11:46.573 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7b272df1ebc64b2db3c75a457ffc83eb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:46.573 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:46.573 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:46.573 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.573 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:47.139 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:47.139 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b6d139fc-3249-46fe-bf8f-a1644e96786d -a 10.0.0.2 -s 4420 -i 4 00:11:47.139 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:47.139 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:11:47.139 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:47.139 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:11:47.139 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:11:47.139 20:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:11:49.038 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:49.038 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:49.038 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:49.297 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:11:49.297 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:49.297 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:11:49.297 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:49.297 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:49.297 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:49.297 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:49.297 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:11:49.297 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:49.297 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:49.297 [ 0]:0x1 00:11:49.297 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:49.297 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:49.297 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4a43a4e70b5849bd97e4f96abd402d3c 00:11:49.297 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4a43a4e70b5849bd97e4f96abd402d3c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:49.297 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:11:49.297 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:49.297 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:49.297 [ 1]:0x2 00:11:49.297 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:49.297 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:49.297 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7b272df1ebc64b2db3c75a457ffc83eb 00:11:49.297 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7b272df1ebc64b2db3c75a457ffc83eb != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:49.297 20:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:49.556 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:11:49.556 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:49.556 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:49.556 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:49.556 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:49.556 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:49.556 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:49.556 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:49.556 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:49.556 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:49.556 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:49.556 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:49.556 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:49.556 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:49.556 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:49.556 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:49.556 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:49.556 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:49.556 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:11:49.556 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:49.556 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:49.556 [ 0]:0x2 00:11:49.556 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:49.556 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:49.815 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7b272df1ebc64b2db3c75a457ffc83eb 00:11:49.815 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7b272df1ebc64b2db3c75a457ffc83eb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:49.815 20:41:53 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:49.815 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:49.815 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:49.815 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:49.815 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:49.815 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:49.815 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:49.815 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:49.815 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:49.815 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:49.815 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:49.815 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:50.074 [2024-11-26 20:41:53.513789] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:50.074 request: 00:11:50.074 { 00:11:50.074 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:50.074 "nsid": 2, 00:11:50.074 "host": "nqn.2016-06.io.spdk:host1", 00:11:50.074 "method": "nvmf_ns_remove_host", 00:11:50.074 "req_id": 1 00:11:50.074 } 00:11:50.074 Got JSON-RPC error response 00:11:50.074 response: 00:11:50.074 { 00:11:50.074 "code": -32602, 00:11:50.074 "message": "Invalid parameters" 00:11:50.074 } 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:50.074 20:41:53 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:50.074 [ 0]:0x2 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7b272df1ebc64b2db3c75a457ffc83eb 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7b272df1ebc64b2db3c75a457ffc83eb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:50.074 [2024-11-26 20:41:53.682642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e91a80 is same with the state(6) to be set 00:11:50.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1628337 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1628337 /var/tmp/host.sock 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1628337 ']' 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:50.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.074 20:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:50.074 [2024-11-26 20:41:53.736195] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:11:50.074 [2024-11-26 20:41:53.736264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1628337 ] 00:11:50.333 [2024-11-26 20:41:53.803638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.333 [2024-11-26 20:41:53.863484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.591 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:50.591 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:11:50.592 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:50.849 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:51.106 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 4cc78fa3-b8f1-4e7d-aee0-d2a58df4bbf1 00:11:51.106 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:11:51.106 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 4CC78FA3B8F14E7DAEE0D2A58DF4BBF1 -i 00:11:51.364 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid bd4aab3d-25a1-4a07-ac15-7723697780b7 00:11:51.364 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:11:51.364 20:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g BD4AAB3D25A14A07AC157723697780B7 -i 00:11:51.621 20:41:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:51.878 20:41:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:52.136 20:41:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:52.136 20:41:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:52.701 nvme0n1 00:11:52.701 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:52.701 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:53.266 nvme1n2 00:11:53.266 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:11:53.266 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:53.266 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:11:53.266 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:11:53.266 20:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:11:53.524 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:11:53.524 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:11:53.524 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:11:53.524 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:11:53.781 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 4cc78fa3-b8f1-4e7d-aee0-d2a58df4bbf1 == \4\c\c\7\8\f\a\3\-\b\8\f\1\-\4\e\7\d\-\a\e\e\0\-\d\2\a\5\8\d\f\4\b\b\f\1 ]] 00:11:53.781 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:11:53.781 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:11:53.781 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:11:54.039 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ bd4aab3d-25a1-4a07-ac15-7723697780b7 == \b\d\4\a\a\b\3\d\-\2\5\a\1\-\4\a\0\7\-\a\c\1\5\-\7\7\2\3\6\9\7\7\8\0\b\7 ]] 00:11:54.039 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:54.297 20:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:54.555 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 4cc78fa3-b8f1-4e7d-aee0-d2a58df4bbf1 00:11:54.555 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:11:54.555 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 4CC78FA3B8F14E7DAEE0D2A58DF4BBF1 00:11:54.555 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:54.555 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 4CC78FA3B8F14E7DAEE0D2A58DF4BBF1 00:11:54.555 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:54.555 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:54.555 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:54.555 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:54.555 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:54.555 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:54.555 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:54.555 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:54.555 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 4CC78FA3B8F14E7DAEE0D2A58DF4BBF1 00:11:54.813 [2024-11-26 20:41:58.364055] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:11:54.813 [2024-11-26 20:41:58.364094] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:11:54.813 [2024-11-26 
20:41:58.364109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:54.813 request: 00:11:54.813 { 00:11:54.813 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:54.813 "namespace": { 00:11:54.813 "bdev_name": "invalid", 00:11:54.813 "nsid": 1, 00:11:54.813 "nguid": "4CC78FA3B8F14E7DAEE0D2A58DF4BBF1", 00:11:54.813 "no_auto_visible": false 00:11:54.813 }, 00:11:54.813 "method": "nvmf_subsystem_add_ns", 00:11:54.813 "req_id": 1 00:11:54.813 } 00:11:54.813 Got JSON-RPC error response 00:11:54.813 response: 00:11:54.813 { 00:11:54.813 "code": -32602, 00:11:54.813 "message": "Invalid parameters" 00:11:54.813 } 00:11:54.813 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:54.813 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:54.813 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:54.813 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:54.813 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 4cc78fa3-b8f1-4e7d-aee0-d2a58df4bbf1 00:11:54.813 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:11:54.813 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 4CC78FA3B8F14E7DAEE0D2A58DF4BBF1 -i 00:11:55.070 20:41:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:11:57.598 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:11:57.598 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:11:57.598 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:57.598 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:11:57.598 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1628337 00:11:57.598 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1628337 ']' 00:11:57.598 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1628337 00:11:57.598 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:11:57.598 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:57.598 20:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1628337 00:11:57.598 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:57.598 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:57.598 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1628337' 00:11:57.598 killing process with pid 1628337 00:11:57.598 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1628337 00:11:57.598 20:42:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1628337 00:11:57.855 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:58.113 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:11:58.113 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:11:58.113 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:58.113 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:11:58.113 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:58.113 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:11:58.113 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:58.113 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:58.113 rmmod nvme_tcp 00:11:58.113 rmmod nvme_fabrics 00:11:58.113 rmmod nvme_keyring 00:11:58.113 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:58.113 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:11:58.113 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:11:58.113 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1626749 ']' 00:11:58.113 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1626749 00:11:58.113 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1626749 ']' 00:11:58.113 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1626749 00:11:58.113 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:11:58.113 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:58.113 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1626749 00:11:58.113 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:58.113 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:58.113 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1626749' 00:11:58.113 killing process with pid 1626749 00:11:58.113 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1626749 00:11:58.113 20:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1626749 00:11:58.371 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:58.371 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:58.371 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:58.371 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:11:58.371 20:42:02 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:11:58.371 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:58.371 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:11:58.630 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:58.630 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:58.630 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.630 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:58.630 20:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.536 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:00.536 00:12:00.536 real 0m24.972s 00:12:00.536 user 0m36.418s 00:12:00.536 sys 0m4.729s 00:12:00.536 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.536 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:00.536 ************************************ 00:12:00.536 END TEST nvmf_ns_masking 00:12:00.536 ************************************ 00:12:00.536 20:42:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:12:00.536 20:42:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:00.536 20:42:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:00.536 20:42:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.536 20:42:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:00.536 ************************************ 00:12:00.536 START TEST nvmf_nvme_cli 00:12:00.536 ************************************ 00:12:00.536 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:00.536 * Looking for test storage... 
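To recap the tail of the nvmf_ns_masking run that finishes just above, before the nvme_cli test gets going: the namespace is re-added by NGUID and the host is then shown to see no bdevs, i.e. the masked namespace stays hidden. A condensed, approximate rendering of those calls (the NGUID is just the UUID with dashes stripped and upper-cased; the -i flag passed by ns_masking.sh appears to request that the namespace not be auto-visible, and the script's own wrappers supply the RPC socket paths):

  NGUID=4CC78FA3B8F14E7DAEE0D2A58DF4BBF1                              # uuid2nguid 4cc78fa3-b8f1-4e7d-aee0-d2a58df4bbf1
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g "$NGUID"
                                                                      # negative check: unknown bdev -> JSON-RPC error -32602
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$NGUID" -i
  rpc.py -s /var/tmp/host.sock bdev_get_bdevs | jq length             # 0: the host controller exposes no namespace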
00:12:00.536 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:00.536 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:00.536 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:12:00.536 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:00.795 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:00.795 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:00.795 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:00.795 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:00.795 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:12:00.795 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:12:00.795 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:12:00.795 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:12:00.795 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:12:00.795 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:12:00.795 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:12:00.795 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:00.795 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:12:00.795 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:12:00.795 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:00.795 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:00.795 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:12:00.795 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:12:00.795 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:00.795 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:12:00.795 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:12:00.795 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:12:00.795 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:12:00.795 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:00.795 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:12:00.795 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:12:00.795 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:00.795 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:00.795 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:12:00.795 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:00.795 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:00.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.795 --rc genhtml_branch_coverage=1 00:12:00.795 --rc genhtml_function_coverage=1 00:12:00.795 --rc genhtml_legend=1 00:12:00.795 --rc geninfo_all_blocks=1 00:12:00.795 --rc geninfo_unexecuted_blocks=1 00:12:00.795 00:12:00.795 ' 00:12:00.795 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:00.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.795 --rc genhtml_branch_coverage=1 00:12:00.795 --rc genhtml_function_coverage=1 00:12:00.795 --rc genhtml_legend=1 00:12:00.795 --rc geninfo_all_blocks=1 00:12:00.795 --rc geninfo_unexecuted_blocks=1 00:12:00.795 00:12:00.795 ' 00:12:00.795 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:00.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.795 --rc genhtml_branch_coverage=1 00:12:00.795 --rc genhtml_function_coverage=1 00:12:00.795 --rc genhtml_legend=1 00:12:00.795 --rc geninfo_all_blocks=1 00:12:00.795 --rc geninfo_unexecuted_blocks=1 00:12:00.795 00:12:00.795 ' 00:12:00.795 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:00.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.795 --rc genhtml_branch_coverage=1 00:12:00.795 --rc genhtml_function_coverage=1 00:12:00.795 --rc genhtml_legend=1 00:12:00.795 --rc geninfo_all_blocks=1 00:12:00.795 --rc geninfo_unexecuted_blocks=1 00:12:00.795 00:12:00.796 ' 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
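The cmp_versions trace above is autotest_common.sh deciding whether the installed lcov (1.15 on this host) predates version 2, which selects the older --rc lcov_branch_coverage=1 style of coverage options exported right after. A simplified stand-in for that dotted-version comparison, not the script's exact code:

  lt() {                                       # true when $1 sorts before $2, field by field
    local IFS=. i
    local -a a=($1) b=($2)
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                                   # equal versions are not "less than"
  }
  lt 1.15 2 && echo "old lcov option syntax"   # matches the 1.15 < 2 result traced above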
00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:00.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:00.796 20:42:04 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:12:00.796 20:42:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:03.327 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:03.327 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:03.327 
20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:03.327 Found net devices under 0000:09:00.0: cvl_0_0 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.327 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:03.328 Found net devices under 0000:09:00.1: cvl_0_1 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:03.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:03.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:12:03.328 00:12:03.328 --- 10.0.0.2 ping statistics --- 00:12:03.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.328 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:03.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:03.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:12:03.328 00:12:03.328 --- 10.0.0.1 ping statistics --- 00:12:03.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.328 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1631481 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1631481 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 1631481 ']' 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:03.328 20:42:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:03.328 [2024-11-26 20:42:06.756665] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
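The nvmf_tcp_init sequence traced above is what builds the two-port topology used by every TCP test in this log: the first e810 port (cvl_0_0) is moved into a dedicated network namespace and addressed as the target, the second (cvl_0_1) stays in the root namespace as the initiator, and the round-trip pings confirm the link. Condensed from the trace (interface and namespace names are this host's):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:...'                          # tagged so teardown can strip only these rules
  ping -c 1 10.0.0.2                                              # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # namespace -> initiator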
00:12:03.328 [2024-11-26 20:42:06.756749] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.328 [2024-11-26 20:42:06.839509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:03.328 [2024-11-26 20:42:06.901616] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:03.328 [2024-11-26 20:42:06.901673] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:03.328 [2024-11-26 20:42:06.901687] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:03.328 [2024-11-26 20:42:06.901697] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:03.328 [2024-11-26 20:42:06.901706] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:03.328 [2024-11-26 20:42:06.903257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.328 [2024-11-26 20:42:06.903325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:03.328 [2024-11-26 20:42:06.903385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:03.328 [2024-11-26 20:42:06.903388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:03.618 [2024-11-26 20:42:07.049182] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:03.618 Malloc0 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
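Around this point nvme_cli.sh provisions the target over JSON-RPC and then exercises it from the host side; the individual rpc_cmd and nvme invocations appear in the trace just above and below. Collapsed into plain commands (the script's rpc_cmd helper wraps rpc.py against the target's /var/tmp/spdk.sock, and NVME_HOST carries the --hostnqn/--hostid pair set up earlier), the sequence is roughly:

  rpc.py nvmf_create_transport -t tcp -o -u 8192                      # flags exactly as the script passes them
  rpc.py bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB backing bdevs, 512 B blocks
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291           # -a: allow any host; -s/-d: serial/model
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  nvme discover -t tcp -a 10.0.0.2 -s 4420 "${NVME_HOST[@]}"          # shows the discovery and cnode1 log entries
  nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1                       # after /dev/nvme0n1 and /dev/nvme0n2 appear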
00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:03.618 Malloc1 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:03.618 [2024-11-26 20:42:07.151978] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:12:03.618 00:12:03.618 Discovery Log Number of Records 2, Generation counter 2 00:12:03.618 =====Discovery Log Entry 0====== 00:12:03.618 trtype: tcp 00:12:03.618 adrfam: ipv4 00:12:03.618 subtype: current discovery subsystem 00:12:03.618 treq: not required 00:12:03.618 portid: 0 00:12:03.618 trsvcid: 4420 00:12:03.618 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:12:03.618 traddr: 10.0.0.2 00:12:03.618 eflags: explicit discovery connections, duplicate discovery information 00:12:03.618 sectype: none 00:12:03.618 =====Discovery Log Entry 1====== 00:12:03.618 trtype: tcp 00:12:03.618 adrfam: ipv4 00:12:03.618 subtype: nvme subsystem 00:12:03.618 treq: not required 00:12:03.618 portid: 0 00:12:03.618 trsvcid: 4420 00:12:03.618 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:03.618 traddr: 10.0.0.2 00:12:03.618 eflags: none 00:12:03.618 sectype: none 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:03.618 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:03.916 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:03.916 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:03.916 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:03.916 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:03.916 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:03.916 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:03.916 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:03.916 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:04.482 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:04.482 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:12:04.482 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:04.482 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:12:04.482 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:12:04.482 20:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:06.381 20:42:09 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:12:06.381 /dev/nvme0n2 ]] 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:06.381 20:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:06.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.381 20:42:10 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:06.381 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:12:06.381 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:06.381 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.381 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:06.381 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.381 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:12:06.381 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:06.381 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:06.381 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.381 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:06.381 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.381 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:06.381 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:06.381 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:06.381 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:12:06.381 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:06.381 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:12:06.381 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:06.381 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:06.381 rmmod nvme_tcp 00:12:06.639 rmmod nvme_fabrics 00:12:06.639 rmmod nvme_keyring 00:12:06.639 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:06.639 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:12:06.639 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:12:06.639 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1631481 ']' 00:12:06.639 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1631481 00:12:06.639 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 1631481 ']' 00:12:06.639 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 1631481 00:12:06.639 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:12:06.639 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:06.639 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
1631481 00:12:06.639 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:06.639 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:06.639 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1631481' 00:12:06.639 killing process with pid 1631481 00:12:06.639 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 1631481 00:12:06.639 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 1631481 00:12:06.897 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:06.897 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:06.897 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:06.897 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:12:06.897 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:12:06.897 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:06.897 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:12:06.897 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:06.897 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:06.897 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.897 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:06.897 20:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.804 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:08.804 00:12:08.804 real 0m8.331s 00:12:08.804 user 0m14.732s 00:12:08.804 sys 0m2.421s 00:12:08.804 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.804 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:08.804 ************************************ 00:12:08.804 END TEST nvmf_nvme_cli 00:12:08.804 ************************************ 00:12:09.063 20:42:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:12:09.063 20:42:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:09.063 20:42:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:09.063 20:42:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.063 20:42:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:09.063 ************************************ 00:12:09.063 START TEST nvmf_vfio_user 00:12:09.063 ************************************ 00:12:09.063 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:12:09.063 * Looking for test storage... 00:12:09.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:09.063 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:09.063 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:12:09.063 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:09.063 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:09.063 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:09.063 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:09.063 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:09.063 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:09.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.064 --rc genhtml_branch_coverage=1 00:12:09.064 --rc genhtml_function_coverage=1 00:12:09.064 --rc genhtml_legend=1 00:12:09.064 --rc geninfo_all_blocks=1 00:12:09.064 --rc geninfo_unexecuted_blocks=1 00:12:09.064 00:12:09.064 ' 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:09.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.064 --rc genhtml_branch_coverage=1 00:12:09.064 --rc genhtml_function_coverage=1 00:12:09.064 --rc genhtml_legend=1 00:12:09.064 --rc geninfo_all_blocks=1 00:12:09.064 --rc geninfo_unexecuted_blocks=1 00:12:09.064 00:12:09.064 ' 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:09.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.064 --rc genhtml_branch_coverage=1 00:12:09.064 --rc genhtml_function_coverage=1 00:12:09.064 --rc genhtml_legend=1 00:12:09.064 --rc geninfo_all_blocks=1 00:12:09.064 --rc geninfo_unexecuted_blocks=1 00:12:09.064 00:12:09.064 ' 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:09.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.064 --rc genhtml_branch_coverage=1 00:12:09.064 --rc genhtml_function_coverage=1 00:12:09.064 --rc genhtml_legend=1 00:12:09.064 --rc geninfo_all_blocks=1 00:12:09.064 --rc geninfo_unexecuted_blocks=1 00:12:09.064 00:12:09.064 ' 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:09.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
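The "[: : integer expression expected" complaint above is a numeric test receiving an empty string: '[' '' -eq 1 ']' expands from an unset or empty variable at nvmf/common.sh line 33. A hedged sketch of the usual guard; SOME_FLAG is a stand-in, since the trace does not show which variable that line actually tests, and the body is a placeholder action:

  # Illustrative only: default the flag before the numeric comparison so an
  # unset/empty value no longer trips "[: : integer expression expected".
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then
      echo "flag is set"    # placeholder for whatever the real branch does
  fi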
00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:09.064 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:09.065 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:09.065 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:09.065 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:09.065 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1632803 00:12:09.065 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1632803' 00:12:09.065 Process pid: 1632803 00:12:09.065 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:09.065 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:09.065 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1632803 00:12:09.065 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1632803 ']' 00:12:09.065 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.065 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.065 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.065 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.065 20:42:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:09.323 [2024-11-26 20:42:12.779646] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:12:09.323 [2024-11-26 20:42:12.779734] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.323 [2024-11-26 20:42:12.846774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:09.323 [2024-11-26 20:42:12.905156] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:09.323 [2024-11-26 20:42:12.905209] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:09.323 [2024-11-26 20:42:12.905237] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:09.323 [2024-11-26 20:42:12.905248] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:09.323 [2024-11-26 20:42:12.905258] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:09.323 [2024-11-26 20:42:12.906866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.323 [2024-11-26 20:42:12.906931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:09.323 [2024-11-26 20:42:12.906998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:09.323 [2024-11-26 20:42:12.907002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.581 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:09.582 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:12:09.582 20:42:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:10.513 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:10.770 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:10.770 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:10.770 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:10.770 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:10.770 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:11.029 Malloc1 00:12:11.029 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:11.594 20:42:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:11.594 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:11.852 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:11.852 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:11.852 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:12.110 Malloc2 00:12:12.368 20:42:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
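The target setup traced here, together with the matching namespace and listener calls for cnode2 that follow just below, condenses to a short RPC sequence. A sketch using the same commands seen in the trace, assuming nvmf_tgt is already running:

  # Condensed restatement of the setup calls traced above and just below.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t VFIOUSER
  for i in 1 2; do
      mkdir -p "/var/run/vfio-user/domain/vfio-user$i/$i"
      $rpc bdev_malloc_create 64 512 -b "Malloc$i"    # 64 MiB bdev, 512 B blocks
      $rpc nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
      $rpc nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
      $rpc nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
          -t VFIOUSER -a "/var/run/vfio-user/domain/vfio-user$i/$i" -s 0
  done

Each subsystem ends up with one malloc-backed namespace and one vfio-user listener rooted under /var/run/vfio-user, which is what the identify and perf tools below attach to.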
00:12:12.625 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:12.882 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:13.142 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:13.142 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:13.142 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:13.142 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:13.142 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:13.142 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:13.142 [2024-11-26 20:42:16.624379] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:12:13.142 [2024-11-26 20:42:16.624430] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1633232 ] 00:12:13.143 [2024-11-26 20:42:16.674565] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:13.143 [2024-11-26 20:42:16.683768] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:13.143 [2024-11-26 20:42:16.683800] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fdb9345d000 00:12:13.143 [2024-11-26 20:42:16.684765] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:13.143 [2024-11-26 20:42:16.685759] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:13.143 [2024-11-26 20:42:16.686766] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:13.143 [2024-11-26 20:42:16.687787] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:13.143 [2024-11-26 20:42:16.688773] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:13.143 [2024-11-26 20:42:16.689777] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:13.143 [2024-11-26 20:42:16.690783] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:12:13.143 [2024-11-26 20:42:16.691787] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:13.143 [2024-11-26 20:42:16.692800] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:13.143 [2024-11-26 20:42:16.692820] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fdb93452000 00:12:13.143 [2024-11-26 20:42:16.693948] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:13.143 [2024-11-26 20:42:16.709717] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:13.143 [2024-11-26 20:42:16.709756] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:12:13.143 [2024-11-26 20:42:16.711908] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:13.143 [2024-11-26 20:42:16.711972] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:13.143 [2024-11-26 20:42:16.712076] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:12:13.143 [2024-11-26 20:42:16.712119] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:12:13.143 [2024-11-26 20:42:16.712131] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:12:13.143 [2024-11-26 20:42:16.712894] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:13.143 [2024-11-26 20:42:16.712919] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:12:13.143 [2024-11-26 20:42:16.712933] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:12:13.143 [2024-11-26 20:42:16.713901] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:13.143 [2024-11-26 20:42:16.713922] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:12:13.143 [2024-11-26 20:42:16.713936] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:12:13.143 [2024-11-26 20:42:16.714905] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:13.143 [2024-11-26 20:42:16.714925] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:13.143 [2024-11-26 20:42:16.715912] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:12:13.143 [2024-11-26 20:42:16.715930] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:12:13.143 [2024-11-26 20:42:16.715939] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:12:13.143 [2024-11-26 20:42:16.715950] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:13.143 [2024-11-26 20:42:16.716060] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:12:13.143 [2024-11-26 20:42:16.716068] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:13.143 [2024-11-26 20:42:16.716076] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:13.143 [2024-11-26 20:42:16.716917] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:13.143 [2024-11-26 20:42:16.717923] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:13.143 [2024-11-26 20:42:16.718929] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:13.143 [2024-11-26 20:42:16.719921] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:13.143 [2024-11-26 20:42:16.720043] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:13.143 [2024-11-26 20:42:16.720942] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:13.143 [2024-11-26 20:42:16.720961] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:13.143 [2024-11-26 20:42:16.720970] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:12:13.143 [2024-11-26 20:42:16.720994] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:12:13.143 [2024-11-26 20:42:16.721010] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:12:13.143 [2024-11-26 20:42:16.721041] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:13.143 [2024-11-26 20:42:16.721062] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:13.143 [2024-11-26 20:42:16.721069] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:13.143 [2024-11-26 20:42:16.721090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:12:13.143 [2024-11-26 20:42:16.721164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:13.143 [2024-11-26 20:42:16.721180] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:12:13.143 [2024-11-26 20:42:16.721189] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:12:13.143 [2024-11-26 20:42:16.721196] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:12:13.143 [2024-11-26 20:42:16.721203] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:13.143 [2024-11-26 20:42:16.721211] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:12:13.143 [2024-11-26 20:42:16.721218] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:12:13.143 [2024-11-26 20:42:16.721226] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:12:13.143 [2024-11-26 20:42:16.721240] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:12:13.143 [2024-11-26 20:42:16.721254] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:13.143 [2024-11-26 20:42:16.721266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:13.144 [2024-11-26 20:42:16.721281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.144 [2024-11-26 20:42:16.721320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.144 [2024-11-26 20:42:16.721334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.144 [2024-11-26 20:42:16.721347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.144 [2024-11-26 20:42:16.721356] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:12:13.144 [2024-11-26 20:42:16.721373] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:13.144 [2024-11-26 20:42:16.721389] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:13.144 [2024-11-26 20:42:16.721402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:13.144 [2024-11-26 20:42:16.721413] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:12:13.144 
[2024-11-26 20:42:16.721422] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:13.144 [2024-11-26 20:42:16.721437] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:12:13.144 [2024-11-26 20:42:16.721448] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:12:13.144 [2024-11-26 20:42:16.721465] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:13.144 [2024-11-26 20:42:16.721479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:13.144 [2024-11-26 20:42:16.721549] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:12:13.144 [2024-11-26 20:42:16.721566] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:12:13.144 [2024-11-26 20:42:16.721580] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:13.144 [2024-11-26 20:42:16.721603] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:13.144 [2024-11-26 20:42:16.721609] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:13.144 [2024-11-26 20:42:16.721619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:13.144 [2024-11-26 20:42:16.721633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:13.144 [2024-11-26 20:42:16.721654] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:12:13.144 [2024-11-26 20:42:16.721669] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:12:13.144 [2024-11-26 20:42:16.721683] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:12:13.144 [2024-11-26 20:42:16.721695] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:13.144 [2024-11-26 20:42:16.721703] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:13.144 [2024-11-26 20:42:16.721709] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:13.144 [2024-11-26 20:42:16.721718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:13.144 [2024-11-26 20:42:16.721743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:13.144 [2024-11-26 20:42:16.721760] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:12:13.144 [2024-11-26 20:42:16.721776] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:13.144 [2024-11-26 20:42:16.721788] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:13.144 [2024-11-26 20:42:16.721796] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:13.144 [2024-11-26 20:42:16.721802] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:13.144 [2024-11-26 20:42:16.721811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:13.144 [2024-11-26 20:42:16.721825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:13.144 [2024-11-26 20:42:16.721843] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:13.144 [2024-11-26 20:42:16.721855] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:12:13.144 [2024-11-26 20:42:16.721869] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:12:13.144 [2024-11-26 20:42:16.721883] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:12:13.144 [2024-11-26 20:42:16.721892] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:13.144 [2024-11-26 20:42:16.721900] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:12:13.144 [2024-11-26 20:42:16.721908] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:12:13.144 [2024-11-26 20:42:16.721915] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:12:13.144 [2024-11-26 20:42:16.721924] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:12:13.144 [2024-11-26 20:42:16.721948] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:13.144 [2024-11-26 20:42:16.721967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:13.144 [2024-11-26 20:42:16.721986] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:13.144 [2024-11-26 20:42:16.721998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:13.144 [2024-11-26 20:42:16.722014] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:13.144 [2024-11-26 20:42:16.722039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:13.144 [2024-11-26 20:42:16.722055] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:13.144 [2024-11-26 20:42:16.722070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:13.144 [2024-11-26 20:42:16.722100] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:13.144 [2024-11-26 20:42:16.722110] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:13.144 [2024-11-26 20:42:16.722117] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:13.144 [2024-11-26 20:42:16.722122] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:13.144 [2024-11-26 20:42:16.722128] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:12:13.144 [2024-11-26 20:42:16.722137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:13.144 [2024-11-26 20:42:16.722149] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:13.144 [2024-11-26 20:42:16.722157] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:13.144 [2024-11-26 20:42:16.722163] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:13.144 [2024-11-26 20:42:16.722172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:13.145 [2024-11-26 20:42:16.722183] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:13.145 [2024-11-26 20:42:16.722191] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:13.145 [2024-11-26 20:42:16.722197] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:13.145 [2024-11-26 20:42:16.722211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:13.145 [2024-11-26 20:42:16.722225] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:13.145 [2024-11-26 20:42:16.722233] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:13.145 [2024-11-26 20:42:16.722239] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:13.145 [2024-11-26 20:42:16.722247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:13.145 [2024-11-26 20:42:16.722259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:13.145 [2024-11-26 20:42:16.722279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:12:13.145 [2024-11-26 20:42:16.722329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:13.145 [2024-11-26 20:42:16.722344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:13.145 ===================================================== 00:12:13.145 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:13.145 ===================================================== 00:12:13.145 Controller Capabilities/Features 00:12:13.145 ================================ 00:12:13.145 Vendor ID: 4e58 00:12:13.145 Subsystem Vendor ID: 4e58 00:12:13.145 Serial Number: SPDK1 00:12:13.145 Model Number: SPDK bdev Controller 00:12:13.145 Firmware Version: 25.01 00:12:13.145 Recommended Arb Burst: 6 00:12:13.145 IEEE OUI Identifier: 8d 6b 50 00:12:13.145 Multi-path I/O 00:12:13.145 May have multiple subsystem ports: Yes 00:12:13.145 May have multiple controllers: Yes 00:12:13.145 Associated with SR-IOV VF: No 00:12:13.145 Max Data Transfer Size: 131072 00:12:13.145 Max Number of Namespaces: 32 00:12:13.145 Max Number of I/O Queues: 127 00:12:13.145 NVMe Specification Version (VS): 1.3 00:12:13.145 NVMe Specification Version (Identify): 1.3 00:12:13.145 Maximum Queue Entries: 256 00:12:13.145 Contiguous Queues Required: Yes 00:12:13.145 Arbitration Mechanisms Supported 00:12:13.145 Weighted Round Robin: Not Supported 00:12:13.145 Vendor Specific: Not Supported 00:12:13.145 Reset Timeout: 15000 ms 00:12:13.145 Doorbell Stride: 4 bytes 00:12:13.145 NVM Subsystem Reset: Not Supported 00:12:13.145 Command Sets Supported 00:12:13.145 NVM Command Set: Supported 00:12:13.145 Boot Partition: Not Supported 00:12:13.145 Memory Page Size Minimum: 4096 bytes 00:12:13.145 Memory Page Size Maximum: 4096 bytes 00:12:13.145 Persistent Memory Region: Not Supported 00:12:13.145 Optional Asynchronous Events Supported 00:12:13.145 Namespace Attribute Notices: Supported 00:12:13.145 Firmware Activation Notices: Not Supported 00:12:13.145 ANA Change Notices: Not Supported 00:12:13.145 PLE Aggregate Log Change Notices: Not Supported 00:12:13.145 LBA Status Info Alert Notices: Not Supported 00:12:13.145 EGE Aggregate Log Change Notices: Not Supported 00:12:13.145 Normal NVM Subsystem Shutdown event: Not Supported 00:12:13.145 Zone Descriptor Change Notices: Not Supported 00:12:13.145 Discovery Log Change Notices: Not Supported 00:12:13.145 Controller Attributes 00:12:13.145 128-bit Host Identifier: Supported 00:12:13.145 Non-Operational Permissive Mode: Not Supported 00:12:13.145 NVM Sets: Not Supported 00:12:13.145 Read Recovery Levels: Not Supported 00:12:13.145 Endurance Groups: Not Supported 00:12:13.145 Predictable Latency Mode: Not Supported 00:12:13.145 Traffic Based Keep ALive: Not Supported 00:12:13.145 Namespace Granularity: Not Supported 00:12:13.145 SQ Associations: Not Supported 00:12:13.145 UUID List: Not Supported 00:12:13.145 Multi-Domain Subsystem: Not Supported 00:12:13.145 Fixed Capacity Management: Not Supported 00:12:13.145 Variable Capacity Management: Not Supported 00:12:13.145 Delete Endurance Group: Not Supported 00:12:13.145 Delete NVM Set: Not Supported 00:12:13.145 Extended LBA Formats Supported: Not Supported 00:12:13.145 Flexible Data Placement Supported: Not Supported 00:12:13.145 00:12:13.145 Controller Memory Buffer Support 00:12:13.145 ================================ 00:12:13.145 
Supported: No 00:12:13.145 00:12:13.145 Persistent Memory Region Support 00:12:13.145 ================================ 00:12:13.145 Supported: No 00:12:13.145 00:12:13.145 Admin Command Set Attributes 00:12:13.145 ============================ 00:12:13.145 Security Send/Receive: Not Supported 00:12:13.145 Format NVM: Not Supported 00:12:13.145 Firmware Activate/Download: Not Supported 00:12:13.145 Namespace Management: Not Supported 00:12:13.145 Device Self-Test: Not Supported 00:12:13.145 Directives: Not Supported 00:12:13.145 NVMe-MI: Not Supported 00:12:13.145 Virtualization Management: Not Supported 00:12:13.145 Doorbell Buffer Config: Not Supported 00:12:13.145 Get LBA Status Capability: Not Supported 00:12:13.145 Command & Feature Lockdown Capability: Not Supported 00:12:13.145 Abort Command Limit: 4 00:12:13.145 Async Event Request Limit: 4 00:12:13.145 Number of Firmware Slots: N/A 00:12:13.145 Firmware Slot 1 Read-Only: N/A 00:12:13.145 Firmware Activation Without Reset: N/A 00:12:13.145 Multiple Update Detection Support: N/A 00:12:13.145 Firmware Update Granularity: No Information Provided 00:12:13.145 Per-Namespace SMART Log: No 00:12:13.145 Asymmetric Namespace Access Log Page: Not Supported 00:12:13.145 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:13.145 Command Effects Log Page: Supported 00:12:13.145 Get Log Page Extended Data: Supported 00:12:13.145 Telemetry Log Pages: Not Supported 00:12:13.145 Persistent Event Log Pages: Not Supported 00:12:13.145 Supported Log Pages Log Page: May Support 00:12:13.145 Commands Supported & Effects Log Page: Not Supported 00:12:13.145 Feature Identifiers & Effects Log Page:May Support 00:12:13.145 NVMe-MI Commands & Effects Log Page: May Support 00:12:13.145 Data Area 4 for Telemetry Log: Not Supported 00:12:13.145 Error Log Page Entries Supported: 128 00:12:13.145 Keep Alive: Supported 00:12:13.145 Keep Alive Granularity: 10000 ms 00:12:13.145 00:12:13.145 NVM Command Set Attributes 00:12:13.145 ========================== 00:12:13.145 Submission Queue Entry Size 00:12:13.145 Max: 64 00:12:13.145 Min: 64 00:12:13.145 Completion Queue Entry Size 00:12:13.145 Max: 16 00:12:13.145 Min: 16 00:12:13.145 Number of Namespaces: 32 00:12:13.145 Compare Command: Supported 00:12:13.145 Write Uncorrectable Command: Not Supported 00:12:13.145 Dataset Management Command: Supported 00:12:13.145 Write Zeroes Command: Supported 00:12:13.145 Set Features Save Field: Not Supported 00:12:13.145 Reservations: Not Supported 00:12:13.145 Timestamp: Not Supported 00:12:13.145 Copy: Supported 00:12:13.146 Volatile Write Cache: Present 00:12:13.146 Atomic Write Unit (Normal): 1 00:12:13.146 Atomic Write Unit (PFail): 1 00:12:13.146 Atomic Compare & Write Unit: 1 00:12:13.146 Fused Compare & Write: Supported 00:12:13.146 Scatter-Gather List 00:12:13.146 SGL Command Set: Supported (Dword aligned) 00:12:13.146 SGL Keyed: Not Supported 00:12:13.146 SGL Bit Bucket Descriptor: Not Supported 00:12:13.146 SGL Metadata Pointer: Not Supported 00:12:13.146 Oversized SGL: Not Supported 00:12:13.146 SGL Metadata Address: Not Supported 00:12:13.146 SGL Offset: Not Supported 00:12:13.146 Transport SGL Data Block: Not Supported 00:12:13.146 Replay Protected Memory Block: Not Supported 00:12:13.146 00:12:13.146 Firmware Slot Information 00:12:13.146 ========================= 00:12:13.146 Active slot: 1 00:12:13.146 Slot 1 Firmware Revision: 25.01 00:12:13.146 00:12:13.146 00:12:13.146 Commands Supported and Effects 00:12:13.146 ============================== 00:12:13.146 Admin 
Commands 00:12:13.146 -------------- 00:12:13.146 Get Log Page (02h): Supported 00:12:13.146 Identify (06h): Supported 00:12:13.146 Abort (08h): Supported 00:12:13.146 Set Features (09h): Supported 00:12:13.146 Get Features (0Ah): Supported 00:12:13.146 Asynchronous Event Request (0Ch): Supported 00:12:13.146 Keep Alive (18h): Supported 00:12:13.146 I/O Commands 00:12:13.146 ------------ 00:12:13.146 Flush (00h): Supported LBA-Change 00:12:13.146 Write (01h): Supported LBA-Change 00:12:13.146 Read (02h): Supported 00:12:13.146 Compare (05h): Supported 00:12:13.146 Write Zeroes (08h): Supported LBA-Change 00:12:13.146 Dataset Management (09h): Supported LBA-Change 00:12:13.146 Copy (19h): Supported LBA-Change 00:12:13.146 00:12:13.146 Error Log 00:12:13.146 ========= 00:12:13.146 00:12:13.146 Arbitration 00:12:13.146 =========== 00:12:13.146 Arbitration Burst: 1 00:12:13.146 00:12:13.146 Power Management 00:12:13.146 ================ 00:12:13.146 Number of Power States: 1 00:12:13.146 Current Power State: Power State #0 00:12:13.146 Power State #0: 00:12:13.146 Max Power: 0.00 W 00:12:13.146 Non-Operational State: Operational 00:12:13.146 Entry Latency: Not Reported 00:12:13.146 Exit Latency: Not Reported 00:12:13.146 Relative Read Throughput: 0 00:12:13.146 Relative Read Latency: 0 00:12:13.146 Relative Write Throughput: 0 00:12:13.146 Relative Write Latency: 0 00:12:13.146 Idle Power: Not Reported 00:12:13.146 Active Power: Not Reported 00:12:13.146 Non-Operational Permissive Mode: Not Supported 00:12:13.146 00:12:13.146 Health Information 00:12:13.146 ================== 00:12:13.146 Critical Warnings: 00:12:13.146 Available Spare Space: OK 00:12:13.146 Temperature: OK 00:12:13.146 Device Reliability: OK 00:12:13.146 Read Only: No 00:12:13.146 Volatile Memory Backup: OK 00:12:13.146 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:13.146 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:13.146 Available Spare: 0% 00:12:13.146 Available Sp[2024-11-26 20:42:16.722470] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:13.146 [2024-11-26 20:42:16.722488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:13.146 [2024-11-26 20:42:16.722534] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:12:13.146 [2024-11-26 20:42:16.722552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.146 [2024-11-26 20:42:16.722564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.146 [2024-11-26 20:42:16.722574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.146 [2024-11-26 20:42:16.722584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.146 [2024-11-26 20:42:16.725315] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:13.146 [2024-11-26 20:42:16.725338] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:13.146 [2024-11-26 20:42:16.725960] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:13.146 [2024-11-26 20:42:16.726052] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:12:13.146 [2024-11-26 20:42:16.726065] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:12:13.146 [2024-11-26 20:42:16.726970] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:13.146 [2024-11-26 20:42:16.726994] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:12:13.146 [2024-11-26 20:42:16.727046] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:13.146 [2024-11-26 20:42:16.730314] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:13.146 are Threshold: 0% 00:12:13.146 Life Percentage Used: 0% 00:12:13.146 Data Units Read: 0 00:12:13.146 Data Units Written: 0 00:12:13.146 Host Read Commands: 0 00:12:13.146 Host Write Commands: 0 00:12:13.146 Controller Busy Time: 0 minutes 00:12:13.146 Power Cycles: 0 00:12:13.146 Power On Hours: 0 hours 00:12:13.146 Unsafe Shutdowns: 0 00:12:13.146 Unrecoverable Media Errors: 0 00:12:13.146 Lifetime Error Log Entries: 0 00:12:13.146 Warning Temperature Time: 0 minutes 00:12:13.146 Critical Temperature Time: 0 minutes 00:12:13.146 00:12:13.146 Number of Queues 00:12:13.146 ================ 00:12:13.146 Number of I/O Submission Queues: 127 00:12:13.146 Number of I/O Completion Queues: 127 00:12:13.146 00:12:13.146 Active Namespaces 00:12:13.146 ================= 00:12:13.146 Namespace ID:1 00:12:13.146 Error Recovery Timeout: Unlimited 00:12:13.146 Command Set Identifier: NVM (00h) 00:12:13.146 Deallocate: Supported 00:12:13.146 Deallocated/Unwritten Error: Not Supported 00:12:13.146 Deallocated Read Value: Unknown 00:12:13.146 Deallocate in Write Zeroes: Not Supported 00:12:13.146 Deallocated Guard Field: 0xFFFF 00:12:13.146 Flush: Supported 00:12:13.146 Reservation: Supported 00:12:13.147 Namespace Sharing Capabilities: Multiple Controllers 00:12:13.147 Size (in LBAs): 131072 (0GiB) 00:12:13.147 Capacity (in LBAs): 131072 (0GiB) 00:12:13.147 Utilization (in LBAs): 131072 (0GiB) 00:12:13.147 NGUID: E084B91620E44569A80806566BC9C485 00:12:13.147 UUID: e084b916-20e4-4569-a808-06566bc9c485 00:12:13.147 Thin Provisioning: Not Supported 00:12:13.147 Per-NS Atomic Units: Yes 00:12:13.147 Atomic Boundary Size (Normal): 0 00:12:13.147 Atomic Boundary Size (PFail): 0 00:12:13.147 Atomic Boundary Offset: 0 00:12:13.147 Maximum Single Source Range Length: 65535 00:12:13.147 Maximum Copy Length: 65535 00:12:13.147 Maximum Source Range Count: 1 00:12:13.147 NGUID/EUI64 Never Reused: No 00:12:13.147 Namespace Write Protected: No 00:12:13.147 Number of LBA Formats: 1 00:12:13.147 Current LBA Format: LBA Format #00 00:12:13.147 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:13.147 00:12:13.147 20:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
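Before the perf output below, one sanity check on the identify report above: the Active Namespaces section lists 131072 LBAs of 512 bytes, which is exactly the 64 MiB malloc bdev created during setup (the "0GiB" figure is just whole-GiB rounding). A quick check:

  # 131072 LBAs x 512 B per LBA = 64 MiB, i.e. the MALLOC_BDEV_SIZE=64 set earlier.
  echo $(( 131072 * 512 / 1024 / 1024 ))    # prints 64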
00:12:13.404 [2024-11-26 20:42:16.980188] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:18.665 Initializing NVMe Controllers 00:12:18.666 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:18.666 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:18.666 Initialization complete. Launching workers. 00:12:18.666 ======================================================== 00:12:18.666 Latency(us) 00:12:18.666 Device Information : IOPS MiB/s Average min max 00:12:18.666 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33205.37 129.71 3856.49 1193.44 8987.74 00:12:18.666 ======================================================== 00:12:18.666 Total : 33205.37 129.71 3856.49 1193.44 8987.74 00:12:18.666 00:12:18.666 [2024-11-26 20:42:22.001222] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:18.666 20:42:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:18.666 [2024-11-26 20:42:22.266514] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:23.919 Initializing NVMe Controllers 00:12:23.919 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:23.919 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:23.919 Initialization complete. Launching workers. 
00:12:23.919 ======================================================== 00:12:23.919 Latency(us) 00:12:23.919 Device Information : IOPS MiB/s Average min max 00:12:23.919 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15913.20 62.16 8052.04 4972.11 15980.90 00:12:23.919 ======================================================== 00:12:23.919 Total : 15913.20 62.16 8052.04 4972.11 15980.90 00:12:23.919 00:12:23.919 [2024-11-26 20:42:27.303893] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:23.919 20:42:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:23.919 [2024-11-26 20:42:27.526026] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:29.179 [2024-11-26 20:42:32.577633] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:29.179 Initializing NVMe Controllers 00:12:29.179 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:29.179 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:29.179 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:29.179 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:29.179 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:29.179 Initialization complete. Launching workers. 00:12:29.179 Starting thread on core 2 00:12:29.179 Starting thread on core 3 00:12:29.179 Starting thread on core 1 00:12:29.179 20:42:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:29.436 [2024-11-26 20:42:32.909900] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:32.717 [2024-11-26 20:42:36.311623] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:32.717 Initializing NVMe Controllers 00:12:32.717 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:32.717 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:32.717 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:32.717 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:32.717 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:32.717 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:32.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:32.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:32.717 Initialization complete. Launching workers. 
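The two spdk_nvme_perf passes above are internally consistent: at 4 KiB per I/O, MiB/s is IOPS/256, and with queue depth 128 the average latency is close to qd/IOPS. A quick check of both rows:

  awk 'BEGIN {
      # read pass:  33205.37 IOPS at 4 KiB, qd 128
      printf "read:  %.2f MiB/s, ~%.0f us avg\n", 33205.37*4096/1048576, 128/33205.37*1e6
      # write pass: 15913.20 IOPS at 4 KiB, qd 128
      printf "write: %.2f MiB/s, ~%.0f us avg\n", 15913.20*4096/1048576, 128/15913.20*1e6
  }'

This reproduces 129.71 and 62.16 MiB/s exactly, and latencies of roughly 3855 us and 8044 us, close to the reported 3856.49 and 8052.04.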
00:12:32.717 Starting thread on core 1 with urgent priority queue 00:12:32.717 Starting thread on core 2 with urgent priority queue 00:12:32.717 Starting thread on core 3 with urgent priority queue 00:12:32.717 Starting thread on core 0 with urgent priority queue 00:12:32.717 SPDK bdev Controller (SPDK1 ) core 0: 4350.33 IO/s 22.99 secs/100000 ios 00:12:32.717 SPDK bdev Controller (SPDK1 ) core 1: 5170.33 IO/s 19.34 secs/100000 ios 00:12:32.717 SPDK bdev Controller (SPDK1 ) core 2: 5045.33 IO/s 19.82 secs/100000 ios 00:12:32.717 SPDK bdev Controller (SPDK1 ) core 3: 4730.00 IO/s 21.14 secs/100000 ios 00:12:32.717 ======================================================== 00:12:32.717 00:12:32.717 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:32.975 [2024-11-26 20:42:36.632831] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:32.975 Initializing NVMe Controllers 00:12:32.975 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:32.975 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:32.975 Namespace ID: 1 size: 0GB 00:12:32.975 Initialization complete. 00:12:32.975 INFO: using host memory buffer for IO 00:12:32.975 Hello world! 00:12:32.975 [2024-11-26 20:42:36.667500] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:33.240 20:42:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:33.561 [2024-11-26 20:42:36.976135] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:34.519 Initializing NVMe Controllers 00:12:34.519 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:34.519 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:34.519 Initialization complete. Launching workers. 
00:12:34.519 submit (in ns) avg, min, max = 5799.8, 3571.1, 4016164.4 00:12:34.519 complete (in ns) avg, min, max = 25660.2, 2073.3, 4018290.0 00:12:34.519 00:12:34.519 Submit histogram 00:12:34.519 ================ 00:12:34.519 Range in us Cumulative Count 00:12:34.519 3.556 - 3.579: 0.0849% ( 11) 00:12:34.519 3.579 - 3.603: 3.3205% ( 419) 00:12:34.519 3.603 - 3.627: 10.9575% ( 989) 00:12:34.519 3.627 - 3.650: 23.8147% ( 1665) 00:12:34.519 3.650 - 3.674: 32.3166% ( 1101) 00:12:34.519 3.674 - 3.698: 40.4402% ( 1052) 00:12:34.519 3.698 - 3.721: 47.1274% ( 866) 00:12:34.519 3.721 - 3.745: 52.4479% ( 689) 00:12:34.519 3.745 - 3.769: 57.2819% ( 626) 00:12:34.519 3.769 - 3.793: 60.6718% ( 439) 00:12:34.519 3.793 - 3.816: 64.1158% ( 446) 00:12:34.519 3.816 - 3.840: 67.4903% ( 437) 00:12:34.519 3.840 - 3.864: 72.3398% ( 628) 00:12:34.519 3.864 - 3.887: 77.2046% ( 630) 00:12:34.519 3.887 - 3.911: 81.5830% ( 567) 00:12:34.519 3.911 - 3.935: 84.7490% ( 410) 00:12:34.519 3.935 - 3.959: 86.6100% ( 241) 00:12:34.519 3.959 - 3.982: 88.3398% ( 224) 00:12:34.519 3.982 - 4.006: 89.9459% ( 208) 00:12:34.519 4.006 - 4.030: 91.1892% ( 161) 00:12:34.519 4.030 - 4.053: 92.2780% ( 141) 00:12:34.519 4.053 - 4.077: 93.5058% ( 159) 00:12:34.519 4.077 - 4.101: 94.3861% ( 114) 00:12:34.519 4.101 - 4.124: 95.3127% ( 120) 00:12:34.519 4.124 - 4.148: 95.8301% ( 67) 00:12:34.519 4.148 - 4.172: 96.2857% ( 59) 00:12:34.519 4.172 - 4.196: 96.5251% ( 31) 00:12:34.519 4.196 - 4.219: 96.6332% ( 14) 00:12:34.519 4.219 - 4.243: 96.7336% ( 13) 00:12:34.519 4.243 - 4.267: 96.8803% ( 19) 00:12:34.519 4.267 - 4.290: 96.9498% ( 9) 00:12:34.519 4.290 - 4.314: 97.0425% ( 12) 00:12:34.519 4.314 - 4.338: 97.1737% ( 17) 00:12:34.519 4.338 - 4.361: 97.2973% ( 16) 00:12:34.519 4.361 - 4.385: 97.3514% ( 7) 00:12:34.519 4.385 - 4.409: 97.4208% ( 9) 00:12:34.519 4.409 - 4.433: 97.4440% ( 3) 00:12:34.519 4.456 - 4.480: 97.4826% ( 5) 00:12:34.519 4.480 - 4.504: 97.4981% ( 2) 00:12:34.519 4.504 - 4.527: 97.5058% ( 1) 00:12:34.519 4.527 - 4.551: 97.5367% ( 4) 00:12:34.519 4.551 - 4.575: 97.5444% ( 1) 00:12:34.519 4.575 - 4.599: 97.5521% ( 1) 00:12:34.519 4.599 - 4.622: 97.5753% ( 3) 00:12:34.519 4.622 - 4.646: 97.5985% ( 3) 00:12:34.519 4.646 - 4.670: 97.6448% ( 6) 00:12:34.519 4.670 - 4.693: 97.6834% ( 5) 00:12:34.519 4.693 - 4.717: 97.7297% ( 6) 00:12:34.519 4.717 - 4.741: 97.7838% ( 7) 00:12:34.519 4.741 - 4.764: 97.8687% ( 11) 00:12:34.519 4.764 - 4.788: 97.9382% ( 9) 00:12:34.519 4.788 - 4.812: 98.0077% ( 9) 00:12:34.519 4.812 - 4.836: 98.0386% ( 4) 00:12:34.519 4.836 - 4.859: 98.0772% ( 5) 00:12:34.519 4.859 - 4.883: 98.1081% ( 4) 00:12:34.519 4.883 - 4.907: 98.1236% ( 2) 00:12:34.519 4.907 - 4.930: 98.1544% ( 4) 00:12:34.519 4.930 - 4.954: 98.1622% ( 1) 00:12:34.519 4.954 - 4.978: 98.2085% ( 6) 00:12:34.519 4.978 - 5.001: 98.2317% ( 3) 00:12:34.519 5.001 - 5.025: 98.2934% ( 8) 00:12:34.519 5.025 - 5.049: 98.3243% ( 4) 00:12:34.519 5.073 - 5.096: 98.3475% ( 3) 00:12:34.519 5.096 - 5.120: 98.3629% ( 2) 00:12:34.519 5.120 - 5.144: 98.3784% ( 2) 00:12:34.519 5.167 - 5.191: 98.3938% ( 2) 00:12:34.519 5.191 - 5.215: 98.4170% ( 3) 00:12:34.519 5.215 - 5.239: 98.4247% ( 1) 00:12:34.519 5.239 - 5.262: 98.4324% ( 1) 00:12:34.519 5.262 - 5.286: 98.4402% ( 1) 00:12:34.519 5.286 - 5.310: 98.4479% ( 1) 00:12:34.519 5.333 - 5.357: 98.4556% ( 1) 00:12:34.519 5.381 - 5.404: 98.4633% ( 1) 00:12:34.519 5.902 - 5.926: 98.4710% ( 1) 00:12:34.519 6.258 - 6.305: 98.4788% ( 1) 00:12:34.519 6.542 - 6.590: 98.4865% ( 1) 00:12:34.519 6.732 - 6.779: 98.4942% ( 1) 
00:12:34.519 6.779 - 6.827: 98.5019% ( 1) 00:12:34.519 7.396 - 7.443: 98.5174% ( 2) 00:12:34.519 7.443 - 7.490: 98.5251% ( 1) 00:12:34.519 7.538 - 7.585: 98.5328% ( 1) 00:12:34.519 7.585 - 7.633: 98.5405% ( 1) 00:12:34.519 7.680 - 7.727: 98.5483% ( 1) 00:12:34.519 7.727 - 7.775: 98.5560% ( 1) 00:12:34.519 7.775 - 7.822: 98.5792% ( 3) 00:12:34.519 7.822 - 7.870: 98.5946% ( 2) 00:12:34.519 8.059 - 8.107: 98.6023% ( 1) 00:12:34.519 8.154 - 8.201: 98.6178% ( 2) 00:12:34.519 8.296 - 8.344: 98.6255% ( 1) 00:12:34.519 8.344 - 8.391: 98.6409% ( 2) 00:12:34.519 8.391 - 8.439: 98.6486% ( 1) 00:12:34.519 8.439 - 8.486: 98.6718% ( 3) 00:12:34.519 8.581 - 8.628: 98.6795% ( 1) 00:12:34.519 8.628 - 8.676: 98.6950% ( 2) 00:12:34.519 8.723 - 8.770: 98.7027% ( 1) 00:12:34.519 8.913 - 8.960: 98.7104% ( 1) 00:12:34.519 8.960 - 9.007: 98.7259% ( 2) 00:12:34.519 9.007 - 9.055: 98.7336% ( 1) 00:12:34.519 9.055 - 9.102: 98.7413% ( 1) 00:12:34.519 9.150 - 9.197: 98.7568% ( 2) 00:12:34.519 9.244 - 9.292: 98.7645% ( 1) 00:12:34.519 9.434 - 9.481: 98.7722% ( 1) 00:12:34.519 9.576 - 9.624: 98.7799% ( 1) 00:12:34.519 9.624 - 9.671: 98.7876% ( 1) 00:12:34.519 9.671 - 9.719: 98.7954% ( 1) 00:12:34.519 9.719 - 9.766: 98.8031% ( 1) 00:12:34.519 9.813 - 9.861: 98.8263% ( 3) 00:12:34.519 9.861 - 9.908: 98.8340% ( 1) 00:12:34.519 9.908 - 9.956: 98.8417% ( 1) 00:12:34.519 9.956 - 10.003: 98.8494% ( 1) 00:12:34.519 10.003 - 10.050: 98.8571% ( 1) 00:12:34.519 10.050 - 10.098: 98.8726% ( 2) 00:12:34.519 10.098 - 10.145: 98.8803% ( 1) 00:12:34.519 10.145 - 10.193: 98.8880% ( 1) 00:12:34.519 10.240 - 10.287: 98.8958% ( 1) 00:12:34.519 10.287 - 10.335: 98.9035% ( 1) 00:12:34.519 10.335 - 10.382: 98.9266% ( 3) 00:12:34.519 10.430 - 10.477: 98.9344% ( 1) 00:12:34.519 10.667 - 10.714: 98.9498% ( 2) 00:12:34.519 10.856 - 10.904: 98.9575% ( 1) 00:12:34.519 11.093 - 11.141: 98.9653% ( 1) 00:12:34.519 11.330 - 11.378: 98.9730% ( 1) 00:12:34.519 11.425 - 11.473: 98.9807% ( 1) 00:12:34.519 11.567 - 11.615: 98.9884% ( 1) 00:12:34.519 11.662 - 11.710: 98.9961% ( 1) 00:12:34.519 11.710 - 11.757: 99.0039% ( 1) 00:12:34.519 11.804 - 11.852: 99.0116% ( 1) 00:12:34.519 11.947 - 11.994: 99.0193% ( 1) 00:12:34.519 12.800 - 12.895: 99.0270% ( 1) 00:12:34.519 13.179 - 13.274: 99.0347% ( 1) 00:12:34.519 13.369 - 13.464: 99.0425% ( 1) 00:12:34.519 13.559 - 13.653: 99.0502% ( 1) 00:12:34.519 13.653 - 13.748: 99.0579% ( 1) 00:12:34.519 13.938 - 14.033: 99.0656% ( 1) 00:12:34.519 14.222 - 14.317: 99.0888% ( 3) 00:12:34.519 14.412 - 14.507: 99.0965% ( 1) 00:12:34.519 14.601 - 14.696: 99.1042% ( 1) 00:12:34.519 16.972 - 17.067: 99.1120% ( 1) 00:12:34.519 17.161 - 17.256: 99.1197% ( 1) 00:12:34.519 17.256 - 17.351: 99.1351% ( 2) 00:12:34.519 17.351 - 17.446: 99.1660% ( 4) 00:12:34.519 17.446 - 17.541: 99.2046% ( 5) 00:12:34.519 17.541 - 17.636: 99.2587% ( 7) 00:12:34.519 17.636 - 17.730: 99.2896% ( 4) 00:12:34.519 17.730 - 17.825: 99.3436% ( 7) 00:12:34.519 17.825 - 17.920: 99.3822% ( 5) 00:12:34.519 17.920 - 18.015: 99.4517% ( 9) 00:12:34.519 18.015 - 18.110: 99.4981% ( 6) 00:12:34.519 18.110 - 18.204: 99.5521% ( 7) 00:12:34.519 18.204 - 18.299: 99.6371% ( 11) 00:12:34.519 18.299 - 18.394: 99.6680% ( 4) 00:12:34.519 18.394 - 18.489: 99.7066% ( 5) 00:12:34.519 18.489 - 18.584: 99.7915% ( 11) 00:12:34.519 18.584 - 18.679: 99.8301% ( 5) 00:12:34.519 18.679 - 18.773: 99.8378% ( 1) 00:12:34.519 18.773 - 18.868: 99.8533% ( 2) 00:12:34.519 18.868 - 18.963: 99.8764% ( 3) 00:12:34.519 19.153 - 19.247: 99.8996% ( 3) 00:12:34.519 19.532 - 19.627: 99.9073% ( 1) 
00:12:34.519 20.196 - 20.290: 99.9151% ( 1) 00:12:34.519 21.713 - 21.807: 99.9228% ( 1) 00:12:34.519 21.902 - 21.997: 99.9305% ( 1) 00:12:34.519 25.600 - 25.790: 99.9382% ( 1) 00:12:34.519 28.824 - 29.013: 99.9459% ( 1) 00:12:34.519 29.013 - 29.203: 99.9537% ( 1) 00:12:34.520 3980.705 - 4004.978: 99.9846% ( 4) 00:12:34.520 4004.978 - 4029.250: 100.0000% ( 2) 00:12:34.520 00:12:34.520 Complete histogram 00:12:34.520 ================== 00:12:34.520 Range in us Cumulative Count 00:12:34.520 2.062 - 2.074: 0.0077% ( 1) 00:12:34.520 2.074 - 2.086: 11.4826% ( 1486) 00:12:34.520 2.086 - 2.098: 40.0541% ( 3700) 00:12:34.520 2.098 - 2.110: 43.6139% ( 461) 00:12:34.520 2.110 - 2.121: 51.1429% ( 975) 00:12:34.520 2.121 - 2.133: 57.1583% ( 779) 00:12:34.520 2.133 - 2.145: 58.7799% ( 210) 00:12:34.520 2.145 - 2.157: 68.1467% ( 1213) 00:12:34.520 2.157 - 2.169: 75.1660% ( 909) 00:12:34.520 2.169 - 2.181: 76.0927% ( 120) 00:12:34.520 2.181 - 2.193: 79.0039% ( 377) 00:12:34.520 2.193 - 2.204: 80.6332% ( 211) 00:12:34.520 2.204 - 2.216: 81.1274% ( 64) 00:12:34.520 2.216 - 2.228: 84.6023% ( 450) 00:12:34.520 2.228 - 2.240: 89.2664% ( 604) 00:12:34.520 2.240 - 2.252: 91.2046% ( 251) 00:12:34.520 2.252 - 2.264: 92.4710% ( 164) 00:12:34.520 2.264 - 2.276: 93.1815% ( 92) 00:12:34.520 2.276 - 2.287: 93.4440% ( 34) 00:12:34.520 2.287 - 2.299: 93.9459% ( 65) 00:12:34.520 2.299 - 2.311: 94.5714% ( 81) 00:12:34.520 2.311 - 2.323: 95.3127% ( 96) 00:12:34.520 2.323 - 2.335: 95.4749% ( 21) 00:12:34.520 2.335 - 2.347: 95.5367% ( 8) 00:12:34.520 2.347 - 2.359: 95.5985% ( 8) 00:12:34.520 2.359 - 2.370: 95.6757% ( 10) 00:12:34.520 2.370 - 2.382: 95.7992% ( 16) 00:12:34.520 2.382 - 2.394: 96.2239% ( 55) 00:12:34.520 2.394 - 2.406: 96.6332% ( 53) 00:12:34.520 2.406 - 2.418: 96.9035% ( 35) 00:12:34.520 2.418 - 2.430: 97.1274% ( 29) 00:12:34.520 2.430 - 2.441: 97.2741% ( 19) 00:12:34.520 2.441 - 2.453: 97.4672% ( 25) 00:12:34.520 2.453 - 2.465: 97.6834% ( 28) 00:12:34.520 2.465 - 2.477: 97.8301% ( 19) 00:12:34.520 2.477 - 2.489: 97.9073% ( 10) 00:12:34.520 2.489 - 2.501: 97.9846% ( 10) 00:12:34.520 2.501 - 2.513: 98.0849% ( 13) 00:12:34.520 2.513 - 2.524: 98.1699% ( 11) 00:12:34.520 2.524 - 2.536: 98.1853% ( 2) 00:12:34.520 2.536 - 2.548: 98.2394% ( 7) 00:12:34.520 2.548 - 2.560: 98.2780% ( 5) 00:12:34.520 2.560 - 2.572: 98.3243% ( 6) 00:12:34.520 2.572 - 2.584: 98.3320% ( 1) 00:12:34.520 2.607 - 2.619: 98.3398% ( 1) 00:12:34.520 2.619 - 2.631: 98.3475% ( 1) 00:12:34.520 2.643 - 2.655: 98.3784% ( 4) 00:12:34.520 2.667 - 2.679: 98.3861% ( 1) 00:12:34.520 2.714 - 2.726: 98.3938% ( 1) 00:12:34.520 2.904 - 2.916: 98.4015% ( 1) 00:12:34.520 2.927 - 2.939: 98.4093% ( 1) 00:12:34.520 3.271 - 3.295: 98.4170% ( 1) 00:12:34.520 3.295 - 3.319: 98.4247% ( 1) 00:12:34.520 3.437 - 3.461: 98.4479% ( 3) 00:12:34.520 3.461 - 3.484: 98.4556% ( 1) 00:12:34.520 3.532 - 3.556: 98.4633% ( 1) 00:12:34.520 3.556 - 3.579: 98.4788% ( 2) 00:12:34.520 3.579 - 3.603: 98.4865% ( 1) 00:12:34.520 3.603 - 3.627: 98.4942% ( 1) 00:12:34.520 3.674 - 3.698: 98.5097% ( 2) 00:12:34.520 3.721 - 3.745: 98.5174% ( 1) 00:12:34.520 3.745 - 3.769: 98.5328% ( 2) 00:12:34.520 3.769 - 3.793: 98.5405% ( 1) 00:12:34.520 3.793 - 3.816: 98.5483% ( 1) 00:12:34.520 3.816 - 3.840: 98.5560% ( 1) 00:12:34.520 3.887 - 3.911: 98.5637% ( 1) 00:12:34.520 3.911 - 3.935: 98.5714% ( 1) 00:12:34.520 3.935 - 3.959: 98.5792% ( 1) 00:12:34.520 4.053 - 4.077: 98.5869% ( 1) 00:12:34.520 4.124 - 4.148: 98.5946% ( 1) 00:12:34.520 4.148 - 4.172: 98.6023% ( 1) 00:12:34.520 4.243 - 4.267: 98.6178% ( 
2) 00:12:34.520 5.760 - 5.784: 98.6255% ( 1) 00:12:34.520 6.305 - 6.353: 98.6409% ( 2) 00:12:34.520 6.590 - 6.637: 98.6486% ( 1) 00:12:34.520 7.064 - 7.111: 98.6564% ( 1) 00:12:34.520 7.253 - 7.301: 98.6718% ( 2) 00:12:34.520 7.301 - 7.348: 98.6795% ( 1) 00:12:34.520 7.585 - 7.633: 98.6950% ( 2) 00:12:34.520 7.822 - 7.870: 98.7027% ( 1) 00:12:34.520 7.964 - 8.012: 98.7104% ( 1) 00:12:34.520 8.059 - 8.107: 98.7181% ( 1) 00:12:34.520 8.296 - 8.344: 98.7259% ( 1) 00:12:34.520 8.344 - 8.391: 98.7336% ( 1) 00:12:34.520 8.391 - 8.439: 9[2024-11-26 20:42:37.996426] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:34.520 8.7413% ( 1) 00:12:34.520 8.676 - 8.723: 98.7490% ( 1) 00:12:34.520 8.818 - 8.865: 98.7568% ( 1) 00:12:34.520 9.102 - 9.150: 98.7645% ( 1) 00:12:34.520 9.244 - 9.292: 98.7722% ( 1) 00:12:34.520 15.455 - 15.550: 98.7954% ( 3) 00:12:34.520 15.644 - 15.739: 98.8031% ( 1) 00:12:34.520 15.739 - 15.834: 98.8185% ( 2) 00:12:34.520 15.834 - 15.929: 98.8417% ( 3) 00:12:34.520 15.929 - 16.024: 98.8803% ( 5) 00:12:34.520 16.024 - 16.119: 98.9035% ( 3) 00:12:34.520 16.119 - 16.213: 98.9266% ( 3) 00:12:34.520 16.213 - 16.308: 98.9730% ( 6) 00:12:34.520 16.308 - 16.403: 99.0270% ( 7) 00:12:34.520 16.403 - 16.498: 99.0502% ( 3) 00:12:34.520 16.498 - 16.593: 99.0811% ( 4) 00:12:34.520 16.593 - 16.687: 99.1042% ( 3) 00:12:34.520 16.687 - 16.782: 99.1197% ( 2) 00:12:34.520 16.782 - 16.877: 99.1969% ( 10) 00:12:34.520 16.877 - 16.972: 99.2278% ( 4) 00:12:34.520 16.972 - 17.067: 99.2355% ( 1) 00:12:34.520 17.161 - 17.256: 99.2432% ( 1) 00:12:34.520 17.256 - 17.351: 99.2510% ( 1) 00:12:34.520 17.351 - 17.446: 99.2819% ( 4) 00:12:34.520 17.446 - 17.541: 99.3050% ( 3) 00:12:34.520 17.541 - 17.636: 99.3205% ( 2) 00:12:34.520 17.636 - 17.730: 99.3359% ( 2) 00:12:34.520 17.730 - 17.825: 99.3436% ( 1) 00:12:34.520 17.920 - 18.015: 99.3514% ( 1) 00:12:34.520 18.015 - 18.110: 99.3591% ( 1) 00:12:34.520 18.299 - 18.394: 99.3668% ( 1) 00:12:34.520 18.394 - 18.489: 99.3900% ( 3) 00:12:34.520 19.247 - 19.342: 99.3977% ( 1) 00:12:34.520 20.196 - 20.290: 99.4054% ( 1) 00:12:34.520 21.144 - 21.239: 99.4131% ( 1) 00:12:34.520 3009.801 - 3021.938: 99.4208% ( 1) 00:12:34.520 3980.705 - 4004.978: 99.8456% ( 55) 00:12:34.520 4004.978 - 4029.250: 100.0000% ( 20) 00:12:34.520 00:12:34.520 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:34.520 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:34.520 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:34.520 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:34.520 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:34.778 [ 00:12:34.778 { 00:12:34.778 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:34.778 "subtype": "Discovery", 00:12:34.778 "listen_addresses": [], 00:12:34.778 "allow_any_host": true, 00:12:34.778 "hosts": [] 00:12:34.778 }, 00:12:34.778 { 00:12:34.778 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:34.778 "subtype": "NVMe", 00:12:34.778 "listen_addresses": [ 00:12:34.778 { 00:12:34.778 "trtype": "VFIOUSER", 
00:12:34.778 "adrfam": "IPv4", 00:12:34.778 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:34.778 "trsvcid": "0" 00:12:34.778 } 00:12:34.778 ], 00:12:34.778 "allow_any_host": true, 00:12:34.778 "hosts": [], 00:12:34.778 "serial_number": "SPDK1", 00:12:34.778 "model_number": "SPDK bdev Controller", 00:12:34.778 "max_namespaces": 32, 00:12:34.778 "min_cntlid": 1, 00:12:34.778 "max_cntlid": 65519, 00:12:34.778 "namespaces": [ 00:12:34.778 { 00:12:34.778 "nsid": 1, 00:12:34.778 "bdev_name": "Malloc1", 00:12:34.778 "name": "Malloc1", 00:12:34.778 "nguid": "E084B91620E44569A80806566BC9C485", 00:12:34.778 "uuid": "e084b916-20e4-4569-a808-06566bc9c485" 00:12:34.778 } 00:12:34.778 ] 00:12:34.778 }, 00:12:34.778 { 00:12:34.778 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:34.778 "subtype": "NVMe", 00:12:34.778 "listen_addresses": [ 00:12:34.778 { 00:12:34.778 "trtype": "VFIOUSER", 00:12:34.778 "adrfam": "IPv4", 00:12:34.778 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:34.778 "trsvcid": "0" 00:12:34.778 } 00:12:34.778 ], 00:12:34.778 "allow_any_host": true, 00:12:34.778 "hosts": [], 00:12:34.778 "serial_number": "SPDK2", 00:12:34.778 "model_number": "SPDK bdev Controller", 00:12:34.778 "max_namespaces": 32, 00:12:34.778 "min_cntlid": 1, 00:12:34.778 "max_cntlid": 65519, 00:12:34.778 "namespaces": [ 00:12:34.778 { 00:12:34.778 "nsid": 1, 00:12:34.778 "bdev_name": "Malloc2", 00:12:34.778 "name": "Malloc2", 00:12:34.778 "nguid": "B5110E47520949F28789C1EA49776C57", 00:12:34.778 "uuid": "b5110e47-5209-49f2-8789-c1ea49776c57" 00:12:34.778 } 00:12:34.778 ] 00:12:34.778 } 00:12:34.778 ] 00:12:34.778 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:34.778 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1635871 00:12:34.778 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:34.778 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:34.778 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:12:34.778 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:34.778 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:34.778 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:12:34.778 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:34.778 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:35.035 [2024-11-26 20:42:38.537810] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:35.035 Malloc3 00:12:35.035 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:35.293 [2024-11-26 20:42:38.922798] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:35.293 20:42:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:35.293 Asynchronous Event Request test 00:12:35.293 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:35.293 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:35.293 Registering asynchronous event callbacks... 00:12:35.293 Starting namespace attribute notice tests for all controllers... 00:12:35.293 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:35.293 aer_cb - Changed Namespace 00:12:35.293 Cleaning up... 00:12:35.550 [ 00:12:35.550 { 00:12:35.550 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:35.550 "subtype": "Discovery", 00:12:35.550 "listen_addresses": [], 00:12:35.550 "allow_any_host": true, 00:12:35.550 "hosts": [] 00:12:35.550 }, 00:12:35.550 { 00:12:35.550 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:35.550 "subtype": "NVMe", 00:12:35.550 "listen_addresses": [ 00:12:35.550 { 00:12:35.550 "trtype": "VFIOUSER", 00:12:35.550 "adrfam": "IPv4", 00:12:35.550 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:35.550 "trsvcid": "0" 00:12:35.550 } 00:12:35.550 ], 00:12:35.550 "allow_any_host": true, 00:12:35.550 "hosts": [], 00:12:35.550 "serial_number": "SPDK1", 00:12:35.550 "model_number": "SPDK bdev Controller", 00:12:35.550 "max_namespaces": 32, 00:12:35.550 "min_cntlid": 1, 00:12:35.550 "max_cntlid": 65519, 00:12:35.550 "namespaces": [ 00:12:35.550 { 00:12:35.550 "nsid": 1, 00:12:35.550 "bdev_name": "Malloc1", 00:12:35.550 "name": "Malloc1", 00:12:35.550 "nguid": "E084B91620E44569A80806566BC9C485", 00:12:35.550 "uuid": "e084b916-20e4-4569-a808-06566bc9c485" 00:12:35.550 }, 00:12:35.550 { 00:12:35.550 "nsid": 2, 00:12:35.550 "bdev_name": "Malloc3", 00:12:35.550 "name": "Malloc3", 00:12:35.550 "nguid": "5E30DAAAE2CC4EA68DE13AE976F08B17", 00:12:35.550 "uuid": "5e30daaa-e2cc-4ea6-8de1-3ae976f08b17" 00:12:35.550 } 00:12:35.550 ] 00:12:35.550 }, 00:12:35.550 { 00:12:35.550 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:35.550 "subtype": "NVMe", 00:12:35.550 "listen_addresses": [ 00:12:35.550 { 00:12:35.550 "trtype": "VFIOUSER", 00:12:35.550 "adrfam": "IPv4", 00:12:35.550 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:35.550 "trsvcid": "0" 00:12:35.550 } 00:12:35.550 ], 00:12:35.550 "allow_any_host": true, 00:12:35.550 "hosts": [], 00:12:35.550 "serial_number": "SPDK2", 00:12:35.550 "model_number": "SPDK bdev 
Controller", 00:12:35.550 "max_namespaces": 32, 00:12:35.550 "min_cntlid": 1, 00:12:35.550 "max_cntlid": 65519, 00:12:35.550 "namespaces": [ 00:12:35.550 { 00:12:35.550 "nsid": 1, 00:12:35.550 "bdev_name": "Malloc2", 00:12:35.550 "name": "Malloc2", 00:12:35.550 "nguid": "B5110E47520949F28789C1EA49776C57", 00:12:35.550 "uuid": "b5110e47-5209-49f2-8789-c1ea49776c57" 00:12:35.550 } 00:12:35.550 ] 00:12:35.550 } 00:12:35.550 ] 00:12:35.550 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1635871 00:12:35.550 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:35.550 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:35.550 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:35.551 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:35.551 [2024-11-26 20:42:39.227730] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:12:35.551 [2024-11-26 20:42:39.227774] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1635893 ] 00:12:35.808 [2024-11-26 20:42:39.278553] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:35.808 [2024-11-26 20:42:39.287649] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:35.808 [2024-11-26 20:42:39.287682] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f265b665000 00:12:35.808 [2024-11-26 20:42:39.288641] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:35.808 [2024-11-26 20:42:39.289644] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:35.808 [2024-11-26 20:42:39.290661] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:35.808 [2024-11-26 20:42:39.291669] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:35.808 [2024-11-26 20:42:39.292676] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:35.808 [2024-11-26 20:42:39.293703] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:35.808 [2024-11-26 20:42:39.294697] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:35.808 [2024-11-26 20:42:39.295704] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:12:35.808 [2024-11-26 20:42:39.296711] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:35.808 [2024-11-26 20:42:39.296732] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f265b65a000 00:12:35.808 [2024-11-26 20:42:39.297861] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:35.809 [2024-11-26 20:42:39.312674] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:35.809 [2024-11-26 20:42:39.312714] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:12:35.809 [2024-11-26 20:42:39.314802] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:35.809 [2024-11-26 20:42:39.314858] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:35.809 [2024-11-26 20:42:39.314948] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:12:35.809 [2024-11-26 20:42:39.314971] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:12:35.809 [2024-11-26 20:42:39.314981] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:12:35.809 [2024-11-26 20:42:39.315812] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:35.809 [2024-11-26 20:42:39.315838] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:12:35.809 [2024-11-26 20:42:39.315853] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:12:35.809 [2024-11-26 20:42:39.316822] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:35.809 [2024-11-26 20:42:39.316844] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:12:35.809 [2024-11-26 20:42:39.316857] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:12:35.809 [2024-11-26 20:42:39.317829] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:35.809 [2024-11-26 20:42:39.317849] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:35.809 [2024-11-26 20:42:39.318835] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:35.809 [2024-11-26 20:42:39.318855] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:12:35.809 [2024-11-26 20:42:39.318864] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:12:35.809 [2024-11-26 20:42:39.318876] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:35.809 [2024-11-26 20:42:39.318985] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:12:35.809 [2024-11-26 20:42:39.318993] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:35.809 [2024-11-26 20:42:39.319001] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:35.809 [2024-11-26 20:42:39.319845] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:35.809 [2024-11-26 20:42:39.320851] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:35.809 [2024-11-26 20:42:39.321863] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:35.809 [2024-11-26 20:42:39.322857] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:35.809 [2024-11-26 20:42:39.322941] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:35.809 [2024-11-26 20:42:39.323871] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:35.809 [2024-11-26 20:42:39.323891] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:35.809 [2024-11-26 20:42:39.323900] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:12:35.809 [2024-11-26 20:42:39.323923] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:12:35.809 [2024-11-26 20:42:39.323937] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:12:35.809 [2024-11-26 20:42:39.323961] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:35.809 [2024-11-26 20:42:39.323970] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:35.809 [2024-11-26 20:42:39.323977] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:35.809 [2024-11-26 20:42:39.323994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:35.809 [2024-11-26 20:42:39.330315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:35.809 
[2024-11-26 20:42:39.330337] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:12:35.809 [2024-11-26 20:42:39.330345] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:12:35.809 [2024-11-26 20:42:39.330352] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:12:35.809 [2024-11-26 20:42:39.330360] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:35.809 [2024-11-26 20:42:39.330367] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:12:35.809 [2024-11-26 20:42:39.330375] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:12:35.809 [2024-11-26 20:42:39.330382] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:12:35.809 [2024-11-26 20:42:39.330395] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:12:35.809 [2024-11-26 20:42:39.330410] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:35.809 [2024-11-26 20:42:39.338332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:35.809 [2024-11-26 20:42:39.338357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:35.809 [2024-11-26 20:42:39.338370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:35.809 [2024-11-26 20:42:39.338402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:35.809 [2024-11-26 20:42:39.338415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:35.809 [2024-11-26 20:42:39.338424] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:12:35.809 [2024-11-26 20:42:39.338441] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:35.809 [2024-11-26 20:42:39.338456] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:35.809 [2024-11-26 20:42:39.346313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:35.809 [2024-11-26 20:42:39.346331] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:12:35.809 [2024-11-26 20:42:39.346366] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:12:35.809 [2024-11-26 20:42:39.346383] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:12:35.809 [2024-11-26 20:42:39.346395] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:12:35.809 [2024-11-26 20:42:39.346409] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:35.809 [2024-11-26 20:42:39.354318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:35.809 [2024-11-26 20:42:39.354424] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:12:35.809 [2024-11-26 20:42:39.354443] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:12:35.809 [2024-11-26 20:42:39.354457] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:35.809 [2024-11-26 20:42:39.354466] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:35.809 [2024-11-26 20:42:39.354472] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:35.809 [2024-11-26 20:42:39.354482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:35.809 [2024-11-26 20:42:39.362316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:35.809 [2024-11-26 20:42:39.362347] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:12:35.809 [2024-11-26 20:42:39.362368] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:12:35.809 [2024-11-26 20:42:39.362384] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:12:35.809 [2024-11-26 20:42:39.362397] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:35.809 [2024-11-26 20:42:39.362408] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:35.809 [2024-11-26 20:42:39.362414] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:35.809 [2024-11-26 20:42:39.362424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:35.809 [2024-11-26 20:42:39.370313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:35.809 [2024-11-26 20:42:39.370349] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:35.809 [2024-11-26 20:42:39.370365] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:12:35.809 [2024-11-26 20:42:39.370379] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:35.810 [2024-11-26 20:42:39.370388] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:35.810 [2024-11-26 20:42:39.370394] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:35.810 [2024-11-26 20:42:39.370404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:35.810 [2024-11-26 20:42:39.378315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:35.810 [2024-11-26 20:42:39.378343] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:35.810 [2024-11-26 20:42:39.378373] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:12:35.810 [2024-11-26 20:42:39.378386] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:12:35.810 [2024-11-26 20:42:39.378397] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:12:35.810 [2024-11-26 20:42:39.378405] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:35.810 [2024-11-26 20:42:39.378413] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:12:35.810 [2024-11-26 20:42:39.378422] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:12:35.810 [2024-11-26 20:42:39.378430] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:12:35.810 [2024-11-26 20:42:39.378438] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:12:35.810 [2024-11-26 20:42:39.378462] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:35.810 [2024-11-26 20:42:39.386318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:35.810 [2024-11-26 20:42:39.386369] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:35.810 [2024-11-26 20:42:39.394313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:35.810 [2024-11-26 20:42:39.394346] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:35.810 [2024-11-26 20:42:39.402314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:12:35.810 [2024-11-26 20:42:39.402361] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:35.810 [2024-11-26 20:42:39.410318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:35.810 [2024-11-26 20:42:39.410354] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:35.810 [2024-11-26 20:42:39.410366] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:35.810 [2024-11-26 20:42:39.410373] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:35.810 [2024-11-26 20:42:39.410379] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:35.810 [2024-11-26 20:42:39.410384] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:12:35.810 [2024-11-26 20:42:39.410394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:35.810 [2024-11-26 20:42:39.410407] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:35.810 [2024-11-26 20:42:39.410415] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:35.810 [2024-11-26 20:42:39.410421] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:35.810 [2024-11-26 20:42:39.410430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:35.810 [2024-11-26 20:42:39.410442] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:35.810 [2024-11-26 20:42:39.410450] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:35.810 [2024-11-26 20:42:39.410456] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:35.810 [2024-11-26 20:42:39.410465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:35.810 [2024-11-26 20:42:39.410478] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:35.810 [2024-11-26 20:42:39.410486] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:35.810 [2024-11-26 20:42:39.410492] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:35.810 [2024-11-26 20:42:39.410501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:35.810 [2024-11-26 20:42:39.418317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:35.810 [2024-11-26 20:42:39.418372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:35.810 [2024-11-26 20:42:39.418392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:35.810 
[2024-11-26 20:42:39.418404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:35.810 ===================================================== 00:12:35.810 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:35.810 ===================================================== 00:12:35.810 Controller Capabilities/Features 00:12:35.810 ================================ 00:12:35.810 Vendor ID: 4e58 00:12:35.810 Subsystem Vendor ID: 4e58 00:12:35.810 Serial Number: SPDK2 00:12:35.810 Model Number: SPDK bdev Controller 00:12:35.810 Firmware Version: 25.01 00:12:35.810 Recommended Arb Burst: 6 00:12:35.810 IEEE OUI Identifier: 8d 6b 50 00:12:35.810 Multi-path I/O 00:12:35.810 May have multiple subsystem ports: Yes 00:12:35.810 May have multiple controllers: Yes 00:12:35.810 Associated with SR-IOV VF: No 00:12:35.810 Max Data Transfer Size: 131072 00:12:35.810 Max Number of Namespaces: 32 00:12:35.810 Max Number of I/O Queues: 127 00:12:35.810 NVMe Specification Version (VS): 1.3 00:12:35.810 NVMe Specification Version (Identify): 1.3 00:12:35.810 Maximum Queue Entries: 256 00:12:35.810 Contiguous Queues Required: Yes 00:12:35.810 Arbitration Mechanisms Supported 00:12:35.810 Weighted Round Robin: Not Supported 00:12:35.810 Vendor Specific: Not Supported 00:12:35.810 Reset Timeout: 15000 ms 00:12:35.810 Doorbell Stride: 4 bytes 00:12:35.810 NVM Subsystem Reset: Not Supported 00:12:35.810 Command Sets Supported 00:12:35.810 NVM Command Set: Supported 00:12:35.810 Boot Partition: Not Supported 00:12:35.810 Memory Page Size Minimum: 4096 bytes 00:12:35.810 Memory Page Size Maximum: 4096 bytes 00:12:35.810 Persistent Memory Region: Not Supported 00:12:35.810 Optional Asynchronous Events Supported 00:12:35.810 Namespace Attribute Notices: Supported 00:12:35.810 Firmware Activation Notices: Not Supported 00:12:35.810 ANA Change Notices: Not Supported 00:12:35.810 PLE Aggregate Log Change Notices: Not Supported 00:12:35.810 LBA Status Info Alert Notices: Not Supported 00:12:35.810 EGE Aggregate Log Change Notices: Not Supported 00:12:35.810 Normal NVM Subsystem Shutdown event: Not Supported 00:12:35.810 Zone Descriptor Change Notices: Not Supported 00:12:35.810 Discovery Log Change Notices: Not Supported 00:12:35.810 Controller Attributes 00:12:35.810 128-bit Host Identifier: Supported 00:12:35.810 Non-Operational Permissive Mode: Not Supported 00:12:35.810 NVM Sets: Not Supported 00:12:35.810 Read Recovery Levels: Not Supported 00:12:35.810 Endurance Groups: Not Supported 00:12:35.810 Predictable Latency Mode: Not Supported 00:12:35.810 Traffic Based Keep ALive: Not Supported 00:12:35.810 Namespace Granularity: Not Supported 00:12:35.810 SQ Associations: Not Supported 00:12:35.810 UUID List: Not Supported 00:12:35.810 Multi-Domain Subsystem: Not Supported 00:12:35.810 Fixed Capacity Management: Not Supported 00:12:35.810 Variable Capacity Management: Not Supported 00:12:35.810 Delete Endurance Group: Not Supported 00:12:35.810 Delete NVM Set: Not Supported 00:12:35.810 Extended LBA Formats Supported: Not Supported 00:12:35.810 Flexible Data Placement Supported: Not Supported 00:12:35.810 00:12:35.810 Controller Memory Buffer Support 00:12:35.810 ================================ 00:12:35.810 Supported: No 00:12:35.810 00:12:35.810 Persistent Memory Region Support 00:12:35.810 ================================ 00:12:35.810 Supported: No 00:12:35.810 00:12:35.810 Admin Command Set Attributes 
00:12:35.810 ============================ 00:12:35.810 Security Send/Receive: Not Supported 00:12:35.810 Format NVM: Not Supported 00:12:35.810 Firmware Activate/Download: Not Supported 00:12:35.810 Namespace Management: Not Supported 00:12:35.810 Device Self-Test: Not Supported 00:12:35.810 Directives: Not Supported 00:12:35.810 NVMe-MI: Not Supported 00:12:35.810 Virtualization Management: Not Supported 00:12:35.810 Doorbell Buffer Config: Not Supported 00:12:35.810 Get LBA Status Capability: Not Supported 00:12:35.810 Command & Feature Lockdown Capability: Not Supported 00:12:35.810 Abort Command Limit: 4 00:12:35.810 Async Event Request Limit: 4 00:12:35.810 Number of Firmware Slots: N/A 00:12:35.810 Firmware Slot 1 Read-Only: N/A 00:12:35.810 Firmware Activation Without Reset: N/A 00:12:35.811 Multiple Update Detection Support: N/A 00:12:35.811 Firmware Update Granularity: No Information Provided 00:12:35.811 Per-Namespace SMART Log: No 00:12:35.811 Asymmetric Namespace Access Log Page: Not Supported 00:12:35.811 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:35.811 Command Effects Log Page: Supported 00:12:35.811 Get Log Page Extended Data: Supported 00:12:35.811 Telemetry Log Pages: Not Supported 00:12:35.811 Persistent Event Log Pages: Not Supported 00:12:35.811 Supported Log Pages Log Page: May Support 00:12:35.811 Commands Supported & Effects Log Page: Not Supported 00:12:35.811 Feature Identifiers & Effects Log Page:May Support 00:12:35.811 NVMe-MI Commands & Effects Log Page: May Support 00:12:35.811 Data Area 4 for Telemetry Log: Not Supported 00:12:35.811 Error Log Page Entries Supported: 128 00:12:35.811 Keep Alive: Supported 00:12:35.811 Keep Alive Granularity: 10000 ms 00:12:35.811 00:12:35.811 NVM Command Set Attributes 00:12:35.811 ========================== 00:12:35.811 Submission Queue Entry Size 00:12:35.811 Max: 64 00:12:35.811 Min: 64 00:12:35.811 Completion Queue Entry Size 00:12:35.811 Max: 16 00:12:35.811 Min: 16 00:12:35.811 Number of Namespaces: 32 00:12:35.811 Compare Command: Supported 00:12:35.811 Write Uncorrectable Command: Not Supported 00:12:35.811 Dataset Management Command: Supported 00:12:35.811 Write Zeroes Command: Supported 00:12:35.811 Set Features Save Field: Not Supported 00:12:35.811 Reservations: Not Supported 00:12:35.811 Timestamp: Not Supported 00:12:35.811 Copy: Supported 00:12:35.811 Volatile Write Cache: Present 00:12:35.811 Atomic Write Unit (Normal): 1 00:12:35.811 Atomic Write Unit (PFail): 1 00:12:35.811 Atomic Compare & Write Unit: 1 00:12:35.811 Fused Compare & Write: Supported 00:12:35.811 Scatter-Gather List 00:12:35.811 SGL Command Set: Supported (Dword aligned) 00:12:35.811 SGL Keyed: Not Supported 00:12:35.811 SGL Bit Bucket Descriptor: Not Supported 00:12:35.811 SGL Metadata Pointer: Not Supported 00:12:35.811 Oversized SGL: Not Supported 00:12:35.811 SGL Metadata Address: Not Supported 00:12:35.811 SGL Offset: Not Supported 00:12:35.811 Transport SGL Data Block: Not Supported 00:12:35.811 Replay Protected Memory Block: Not Supported 00:12:35.811 00:12:35.811 Firmware Slot Information 00:12:35.811 ========================= 00:12:35.811 Active slot: 1 00:12:35.811 Slot 1 Firmware Revision: 25.01 00:12:35.811 00:12:35.811 00:12:35.811 Commands Supported and Effects 00:12:35.811 ============================== 00:12:35.811 Admin Commands 00:12:35.811 -------------- 00:12:35.811 Get Log Page (02h): Supported 00:12:35.811 Identify (06h): Supported 00:12:35.811 Abort (08h): Supported 00:12:35.811 Set Features (09h): Supported 
00:12:35.811 Get Features (0Ah): Supported 00:12:35.811 Asynchronous Event Request (0Ch): Supported 00:12:35.811 Keep Alive (18h): Supported 00:12:35.811 I/O Commands 00:12:35.811 ------------ 00:12:35.811 Flush (00h): Supported LBA-Change 00:12:35.811 Write (01h): Supported LBA-Change 00:12:35.811 Read (02h): Supported 00:12:35.811 Compare (05h): Supported 00:12:35.811 Write Zeroes (08h): Supported LBA-Change 00:12:35.811 Dataset Management (09h): Supported LBA-Change 00:12:35.811 Copy (19h): Supported LBA-Change 00:12:35.811 00:12:35.811 Error Log 00:12:35.811 ========= 00:12:35.811 00:12:35.811 Arbitration 00:12:35.811 =========== 00:12:35.811 Arbitration Burst: 1 00:12:35.811 00:12:35.811 Power Management 00:12:35.811 ================ 00:12:35.811 Number of Power States: 1 00:12:35.811 Current Power State: Power State #0 00:12:35.811 Power State #0: 00:12:35.811 Max Power: 0.00 W 00:12:35.811 Non-Operational State: Operational 00:12:35.811 Entry Latency: Not Reported 00:12:35.811 Exit Latency: Not Reported 00:12:35.811 Relative Read Throughput: 0 00:12:35.811 Relative Read Latency: 0 00:12:35.811 Relative Write Throughput: 0 00:12:35.811 Relative Write Latency: 0 00:12:35.811 Idle Power: Not Reported 00:12:35.811 Active Power: Not Reported 00:12:35.811 Non-Operational Permissive Mode: Not Supported 00:12:35.811 00:12:35.811 Health Information 00:12:35.811 ================== 00:12:35.811 Critical Warnings: 00:12:35.811 Available Spare Space: OK 00:12:35.811 Temperature: OK 00:12:35.811 Device Reliability: OK 00:12:35.811 Read Only: No 00:12:35.811 Volatile Memory Backup: OK 00:12:35.811 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:35.811 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:35.811 Available Spare: 0% 00:12:35.811 Available Sp[2024-11-26 20:42:39.418525] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:35.811 [2024-11-26 20:42:39.426313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:35.811 [2024-11-26 20:42:39.426377] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:12:35.811 [2024-11-26 20:42:39.426396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:35.811 [2024-11-26 20:42:39.426407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:35.811 [2024-11-26 20:42:39.426417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:35.811 [2024-11-26 20:42:39.426430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:35.811 [2024-11-26 20:42:39.426521] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:35.811 [2024-11-26 20:42:39.426543] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:35.811 [2024-11-26 20:42:39.427522] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:35.811 [2024-11-26 20:42:39.427611] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:12:35.811 [2024-11-26 20:42:39.427627] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:12:35.811 [2024-11-26 20:42:39.428528] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:35.811 [2024-11-26 20:42:39.428552] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:12:35.811 [2024-11-26 20:42:39.428604] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:35.811 [2024-11-26 20:42:39.431330] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:35.811 are Threshold: 0% 00:12:35.811 Life Percentage Used: 0% 00:12:35.811 Data Units Read: 0 00:12:35.811 Data Units Written: 0 00:12:35.811 Host Read Commands: 0 00:12:35.811 Host Write Commands: 0 00:12:35.811 Controller Busy Time: 0 minutes 00:12:35.811 Power Cycles: 0 00:12:35.811 Power On Hours: 0 hours 00:12:35.811 Unsafe Shutdowns: 0 00:12:35.811 Unrecoverable Media Errors: 0 00:12:35.811 Lifetime Error Log Entries: 0 00:12:35.811 Warning Temperature Time: 0 minutes 00:12:35.811 Critical Temperature Time: 0 minutes 00:12:35.811 00:12:35.811 Number of Queues 00:12:35.811 ================ 00:12:35.811 Number of I/O Submission Queues: 127 00:12:35.811 Number of I/O Completion Queues: 127 00:12:35.811 00:12:35.811 Active Namespaces 00:12:35.811 ================= 00:12:35.811 Namespace ID:1 00:12:35.811 Error Recovery Timeout: Unlimited 00:12:35.811 Command Set Identifier: NVM (00h) 00:12:35.811 Deallocate: Supported 00:12:35.811 Deallocated/Unwritten Error: Not Supported 00:12:35.811 Deallocated Read Value: Unknown 00:12:35.811 Deallocate in Write Zeroes: Not Supported 00:12:35.811 Deallocated Guard Field: 0xFFFF 00:12:35.811 Flush: Supported 00:12:35.811 Reservation: Supported 00:12:35.811 Namespace Sharing Capabilities: Multiple Controllers 00:12:35.811 Size (in LBAs): 131072 (0GiB) 00:12:35.811 Capacity (in LBAs): 131072 (0GiB) 00:12:35.811 Utilization (in LBAs): 131072 (0GiB) 00:12:35.811 NGUID: B5110E47520949F28789C1EA49776C57 00:12:35.811 UUID: b5110e47-5209-49f2-8789-c1ea49776c57 00:12:35.811 Thin Provisioning: Not Supported 00:12:35.812 Per-NS Atomic Units: Yes 00:12:35.812 Atomic Boundary Size (Normal): 0 00:12:35.812 Atomic Boundary Size (PFail): 0 00:12:35.812 Atomic Boundary Offset: 0 00:12:35.812 Maximum Single Source Range Length: 65535 00:12:35.812 Maximum Copy Length: 65535 00:12:35.812 Maximum Source Range Count: 1 00:12:35.812 NGUID/EUI64 Never Reused: No 00:12:35.812 Namespace Write Protected: No 00:12:35.812 Number of LBA Formats: 1 00:12:35.812 Current LBA Format: LBA Format #00 00:12:35.812 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:35.812 00:12:35.812 20:42:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:36.068 [2024-11-26 20:42:39.682225] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:41.325 Initializing NVMe Controllers 00:12:41.325 
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:41.325 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:41.325 Initialization complete. Launching workers. 00:12:41.325 ======================================================== 00:12:41.325 Latency(us) 00:12:41.325 Device Information : IOPS MiB/s Average min max 00:12:41.325 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33049.72 129.10 3872.32 1166.04 11540.90 00:12:41.325 ======================================================== 00:12:41.325 Total : 33049.72 129.10 3872.32 1166.04 11540.90 00:12:41.325 00:12:41.325 [2024-11-26 20:42:44.781702] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:41.325 20:42:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:41.583 [2024-11-26 20:42:45.045425] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:46.841 Initializing NVMe Controllers 00:12:46.841 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:46.841 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:46.841 Initialization complete. Launching workers. 00:12:46.841 ======================================================== 00:12:46.841 Latency(us) 00:12:46.841 Device Information : IOPS MiB/s Average min max 00:12:46.841 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31089.33 121.44 4117.11 1209.77 8324.56 00:12:46.841 ======================================================== 00:12:46.841 Total : 31089.33 121.44 4117.11 1209.77 8324.56 00:12:46.841 00:12:46.841 [2024-11-26 20:42:50.067840] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:46.841 20:42:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:46.841 [2024-11-26 20:42:50.289873] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:52.096 [2024-11-26 20:42:55.429463] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:52.096 Initializing NVMe Controllers 00:12:52.096 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:52.096 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:52.096 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:52.096 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:52.096 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:52.096 Initialization complete. Launching workers. 
00:12:52.096 Starting thread on core 2 00:12:52.096 Starting thread on core 3 00:12:52.096 Starting thread on core 1 00:12:52.096 20:42:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:52.096 [2024-11-26 20:42:55.759790] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:55.374 [2024-11-26 20:42:58.842117] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:55.374 Initializing NVMe Controllers 00:12:55.374 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:55.374 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:55.374 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:55.374 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:55.374 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:55.374 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:55.374 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:55.374 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:55.374 Initialization complete. Launching workers. 00:12:55.374 Starting thread on core 1 with urgent priority queue 00:12:55.374 Starting thread on core 2 with urgent priority queue 00:12:55.374 Starting thread on core 3 with urgent priority queue 00:12:55.374 Starting thread on core 0 with urgent priority queue 00:12:55.374 SPDK bdev Controller (SPDK2 ) core 0: 4202.00 IO/s 23.80 secs/100000 ios 00:12:55.374 SPDK bdev Controller (SPDK2 ) core 1: 4805.00 IO/s 20.81 secs/100000 ios 00:12:55.374 SPDK bdev Controller (SPDK2 ) core 2: 4634.67 IO/s 21.58 secs/100000 ios 00:12:55.374 SPDK bdev Controller (SPDK2 ) core 3: 5211.67 IO/s 19.19 secs/100000 ios 00:12:55.374 ======================================================== 00:12:55.374 00:12:55.374 20:42:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:55.637 [2024-11-26 20:42:59.149733] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:55.637 Initializing NVMe Controllers 00:12:55.637 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:55.637 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:55.637 Namespace ID: 1 size: 0GB 00:12:55.637 Initialization complete. 00:12:55.637 INFO: using host memory buffer for IO 00:12:55.637 Hello world! 
00:12:55.637 [2024-11-26 20:42:59.161808] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:55.637 20:42:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:55.898 [2024-11-26 20:42:59.479803] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:57.270 Initializing NVMe Controllers 00:12:57.270 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:57.270 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:57.270 Initialization complete. Launching workers. 00:12:57.270 submit (in ns) avg, min, max = 8096.5, 3553.3, 4019303.3 00:12:57.270 complete (in ns) avg, min, max = 27914.2, 2068.9, 4021937.8 00:12:57.270 00:12:57.270 Submit histogram 00:12:57.270 ================ 00:12:57.270 Range in us Cumulative Count 00:12:57.270 3.532 - 3.556: 0.0078% ( 1) 00:12:57.270 3.556 - 3.579: 0.3277% ( 41) 00:12:57.270 3.579 - 3.603: 3.2306% ( 372) 00:12:57.270 3.603 - 3.627: 9.7854% ( 840) 00:12:57.270 3.627 - 3.650: 19.0480% ( 1187) 00:12:57.270 3.650 - 3.674: 26.5314% ( 959) 00:12:57.270 3.674 - 3.698: 33.5388% ( 898) 00:12:57.270 3.698 - 3.721: 41.0066% ( 957) 00:12:57.270 3.721 - 3.745: 47.1167% ( 783) 00:12:57.270 3.745 - 3.769: 52.7195% ( 718) 00:12:57.270 3.769 - 3.793: 56.4963% ( 484) 00:12:57.270 3.793 - 3.816: 59.6332% ( 402) 00:12:57.270 3.816 - 3.840: 63.1838% ( 455) 00:12:57.270 3.840 - 3.864: 67.9204% ( 607) 00:12:57.270 3.864 - 3.887: 72.3839% ( 572) 00:12:57.270 3.887 - 3.911: 76.6836% ( 551) 00:12:57.270 3.911 - 3.935: 80.6009% ( 502) 00:12:57.270 3.935 - 3.959: 83.2462% ( 339) 00:12:57.270 3.959 - 3.982: 85.5560% ( 296) 00:12:57.270 3.982 - 4.006: 87.2883% ( 222) 00:12:57.270 4.006 - 4.030: 88.7085% ( 182) 00:12:57.270 4.030 - 4.053: 89.7932% ( 139) 00:12:57.270 4.053 - 4.077: 90.8076% ( 130) 00:12:57.270 4.077 - 4.101: 91.9313% ( 144) 00:12:57.270 4.101 - 4.124: 92.7819% ( 109) 00:12:57.270 4.124 - 4.148: 93.6403% ( 110) 00:12:57.270 4.148 - 4.172: 94.2957% ( 84) 00:12:57.270 4.172 - 4.196: 94.8420% ( 70) 00:12:57.270 4.196 - 4.219: 95.1697% ( 42) 00:12:57.270 4.219 - 4.243: 95.4975% ( 42) 00:12:57.270 4.243 - 4.267: 95.6847% ( 24) 00:12:57.270 4.267 - 4.290: 95.8798% ( 25) 00:12:57.270 4.290 - 4.314: 96.0281% ( 19) 00:12:57.270 4.314 - 4.338: 96.1607% ( 17) 00:12:57.270 4.338 - 4.361: 96.2622% ( 13) 00:12:57.270 4.361 - 4.385: 96.3714% ( 14) 00:12:57.270 4.385 - 4.409: 96.4105% ( 5) 00:12:57.270 4.409 - 4.433: 96.4885% ( 10) 00:12:57.270 4.433 - 4.456: 96.5587% ( 9) 00:12:57.270 4.456 - 4.480: 96.5977% ( 5) 00:12:57.270 4.480 - 4.504: 96.6133% ( 2) 00:12:57.270 4.504 - 4.527: 96.6290% ( 2) 00:12:57.270 4.527 - 4.551: 96.6524% ( 3) 00:12:57.270 4.551 - 4.575: 96.6758% ( 3) 00:12:57.270 4.575 - 4.599: 96.6914% ( 2) 00:12:57.270 4.599 - 4.622: 96.6992% ( 1) 00:12:57.270 4.622 - 4.646: 96.7382% ( 5) 00:12:57.270 4.646 - 4.670: 96.7694% ( 4) 00:12:57.270 4.670 - 4.693: 96.7928% ( 3) 00:12:57.270 4.693 - 4.717: 96.8006% ( 1) 00:12:57.270 4.717 - 4.741: 96.8631% ( 8) 00:12:57.270 4.741 - 4.764: 96.8787% ( 2) 00:12:57.270 4.764 - 4.788: 96.9177% ( 5) 00:12:57.270 4.788 - 4.812: 96.9801% ( 8) 00:12:57.270 4.812 - 4.836: 97.0269% ( 6) 00:12:57.270 4.836 - 4.859: 97.0503% ( 3) 00:12:57.270 4.859 - 4.883: 97.0893% ( 5) 00:12:57.270 4.883 - 
4.907: 97.1362% ( 6) 00:12:57.270 4.907 - 4.930: 97.1596% ( 3) 00:12:57.270 4.930 - 4.954: 97.1986% ( 5) 00:12:57.270 4.954 - 4.978: 97.2532% ( 7) 00:12:57.270 4.978 - 5.001: 97.2922% ( 5) 00:12:57.270 5.001 - 5.025: 97.3469% ( 7) 00:12:57.270 5.025 - 5.049: 97.3625% ( 2) 00:12:57.270 5.049 - 5.073: 97.4405% ( 10) 00:12:57.270 5.073 - 5.096: 97.4717% ( 4) 00:12:57.270 5.096 - 5.120: 97.4951% ( 3) 00:12:57.270 5.120 - 5.144: 97.5029% ( 1) 00:12:57.270 5.144 - 5.167: 97.5419% ( 5) 00:12:57.270 5.167 - 5.191: 97.5654% ( 3) 00:12:57.270 5.191 - 5.215: 97.5810% ( 2) 00:12:57.270 5.215 - 5.239: 97.6044% ( 3) 00:12:57.270 5.239 - 5.262: 97.6278% ( 3) 00:12:57.270 5.310 - 5.333: 97.6434% ( 2) 00:12:57.270 5.333 - 5.357: 97.6512% ( 1) 00:12:57.270 5.357 - 5.381: 97.6668% ( 2) 00:12:57.270 5.476 - 5.499: 97.6902% ( 3) 00:12:57.270 5.499 - 5.523: 97.6980% ( 1) 00:12:57.270 5.523 - 5.547: 97.7058% ( 1) 00:12:57.270 5.547 - 5.570: 97.7136% ( 1) 00:12:57.270 5.570 - 5.594: 97.7214% ( 1) 00:12:57.270 5.665 - 5.689: 97.7370% ( 2) 00:12:57.270 5.689 - 5.713: 97.7526% ( 2) 00:12:57.270 5.713 - 5.736: 97.7604% ( 1) 00:12:57.270 5.736 - 5.760: 97.7682% ( 1) 00:12:57.270 5.784 - 5.807: 97.7838% ( 2) 00:12:57.270 5.973 - 5.997: 97.7995% ( 2) 00:12:57.270 5.997 - 6.021: 97.8229% ( 3) 00:12:57.270 6.044 - 6.068: 97.8307% ( 1) 00:12:57.270 6.068 - 6.116: 97.8385% ( 1) 00:12:57.270 6.116 - 6.163: 97.8463% ( 1) 00:12:57.270 6.163 - 6.210: 97.8697% ( 3) 00:12:57.270 6.210 - 6.258: 97.8775% ( 1) 00:12:57.270 6.258 - 6.305: 97.8853% ( 1) 00:12:57.270 6.305 - 6.353: 97.9009% ( 2) 00:12:57.270 6.353 - 6.400: 97.9321% ( 4) 00:12:57.270 6.447 - 6.495: 97.9399% ( 1) 00:12:57.270 6.542 - 6.590: 97.9477% ( 1) 00:12:57.270 6.637 - 6.684: 97.9555% ( 1) 00:12:57.270 6.874 - 6.921: 97.9711% ( 2) 00:12:57.270 6.969 - 7.016: 97.9789% ( 1) 00:12:57.271 7.064 - 7.111: 97.9867% ( 1) 00:12:57.271 7.159 - 7.206: 97.9945% ( 1) 00:12:57.271 7.206 - 7.253: 98.0023% ( 1) 00:12:57.271 7.253 - 7.301: 98.0101% ( 1) 00:12:57.271 7.301 - 7.348: 98.0258% ( 2) 00:12:57.271 7.348 - 7.396: 98.0336% ( 1) 00:12:57.271 7.396 - 7.443: 98.0414% ( 1) 00:12:57.271 7.490 - 7.538: 98.0492% ( 1) 00:12:57.271 7.633 - 7.680: 98.0570% ( 1) 00:12:57.271 7.727 - 7.775: 98.0726% ( 2) 00:12:57.271 8.059 - 8.107: 98.0804% ( 1) 00:12:57.271 8.107 - 8.154: 98.0882% ( 1) 00:12:57.271 8.154 - 8.201: 98.0960% ( 1) 00:12:57.271 8.201 - 8.249: 98.1038% ( 1) 00:12:57.271 8.249 - 8.296: 98.1194% ( 2) 00:12:57.271 8.296 - 8.344: 98.1272% ( 1) 00:12:57.271 8.344 - 8.391: 98.1350% ( 1) 00:12:57.271 8.391 - 8.439: 98.1428% ( 1) 00:12:57.271 8.439 - 8.486: 98.1506% ( 1) 00:12:57.271 8.581 - 8.628: 98.1662% ( 2) 00:12:57.271 8.628 - 8.676: 98.1740% ( 1) 00:12:57.271 8.676 - 8.723: 98.1818% ( 1) 00:12:57.271 8.723 - 8.770: 98.2052% ( 3) 00:12:57.271 8.770 - 8.818: 98.2130% ( 1) 00:12:57.271 8.818 - 8.865: 98.2286% ( 2) 00:12:57.271 8.960 - 9.007: 98.2520% ( 3) 00:12:57.271 9.007 - 9.055: 98.2599% ( 1) 00:12:57.271 9.055 - 9.102: 98.2677% ( 1) 00:12:57.271 9.102 - 9.150: 98.2755% ( 1) 00:12:57.271 9.150 - 9.197: 98.2911% ( 2) 00:12:57.271 9.197 - 9.244: 98.2989% ( 1) 00:12:57.271 9.292 - 9.339: 98.3067% ( 1) 00:12:57.271 9.387 - 9.434: 98.3145% ( 1) 00:12:57.271 9.434 - 9.481: 98.3223% ( 1) 00:12:57.271 9.481 - 9.529: 98.3301% ( 1) 00:12:57.271 9.529 - 9.576: 98.3379% ( 1) 00:12:57.271 9.576 - 9.624: 98.3457% ( 1) 00:12:57.271 9.861 - 9.908: 98.3613% ( 2) 00:12:57.271 9.908 - 9.956: 98.3691% ( 1) 00:12:57.271 9.956 - 10.003: 98.3925% ( 3) 00:12:57.271 10.050 - 10.098: 98.4003% ( 1) 
00:12:57.271 10.098 - 10.145: 98.4159% ( 2) 00:12:57.271 10.572 - 10.619: 98.4237% ( 1) 00:12:57.271 10.667 - 10.714: 98.4315% ( 1) 00:12:57.271 10.714 - 10.761: 98.4393% ( 1) 00:12:57.271 10.904 - 10.951: 98.4471% ( 1) 00:12:57.271 11.093 - 11.141: 98.4549% ( 1) 00:12:57.271 11.141 - 11.188: 98.4627% ( 1) 00:12:57.271 11.188 - 11.236: 98.4705% ( 1) 00:12:57.271 11.283 - 11.330: 98.4783% ( 1) 00:12:57.271 11.330 - 11.378: 98.4861% ( 1) 00:12:57.271 11.567 - 11.615: 98.4940% ( 1) 00:12:57.271 11.615 - 11.662: 98.5018% ( 1) 00:12:57.271 11.710 - 11.757: 98.5174% ( 2) 00:12:57.271 11.947 - 11.994: 98.5252% ( 1) 00:12:57.271 12.136 - 12.231: 98.5330% ( 1) 00:12:57.271 12.326 - 12.421: 98.5486% ( 2) 00:12:57.271 12.421 - 12.516: 98.5642% ( 2) 00:12:57.271 12.516 - 12.610: 98.5798% ( 2) 00:12:57.271 12.800 - 12.895: 98.5954% ( 2) 00:12:57.271 12.990 - 13.084: 98.6032% ( 1) 00:12:57.271 13.084 - 13.179: 98.6110% ( 1) 00:12:57.271 13.464 - 13.559: 98.6344% ( 3) 00:12:57.271 13.653 - 13.748: 98.6422% ( 1) 00:12:57.271 14.222 - 14.317: 98.6500% ( 1) 00:12:57.271 14.317 - 14.412: 98.6578% ( 1) 00:12:57.271 14.507 - 14.601: 98.6656% ( 1) 00:12:57.271 14.601 - 14.696: 98.6812% ( 2) 00:12:57.271 14.886 - 14.981: 98.6968% ( 2) 00:12:57.271 16.972 - 17.067: 98.7124% ( 2) 00:12:57.271 17.067 - 17.161: 98.7202% ( 1) 00:12:57.271 17.256 - 17.351: 98.7359% ( 2) 00:12:57.271 17.351 - 17.446: 98.7671% ( 4) 00:12:57.271 17.446 - 17.541: 98.8061% ( 5) 00:12:57.271 17.541 - 17.636: 98.8607% ( 7) 00:12:57.271 17.636 - 17.730: 98.9153% ( 7) 00:12:57.271 17.730 - 17.825: 98.9778% ( 8) 00:12:57.271 17.825 - 17.920: 99.0168% ( 5) 00:12:57.271 17.920 - 18.015: 99.0870% ( 9) 00:12:57.271 18.015 - 18.110: 99.1650% ( 10) 00:12:57.271 18.110 - 18.204: 99.2587% ( 12) 00:12:57.271 18.204 - 18.299: 99.3211% ( 8) 00:12:57.271 18.299 - 18.394: 99.3445% ( 3) 00:12:57.271 18.394 - 18.489: 99.4069% ( 8) 00:12:57.271 18.489 - 18.584: 99.5240% ( 15) 00:12:57.271 18.584 - 18.679: 99.5786% ( 7) 00:12:57.271 18.679 - 18.773: 99.6332% ( 7) 00:12:57.271 18.773 - 18.868: 99.6801% ( 6) 00:12:57.271 18.868 - 18.963: 99.7113% ( 4) 00:12:57.271 18.963 - 19.058: 99.7191% ( 1) 00:12:57.271 19.058 - 19.153: 99.7737% ( 7) 00:12:57.271 19.342 - 19.437: 99.7815% ( 1) 00:12:57.271 19.911 - 20.006: 99.7971% ( 2) 00:12:57.271 20.290 - 20.385: 99.8049% ( 1) 00:12:57.271 21.049 - 21.144: 99.8127% ( 1) 00:12:57.271 21.997 - 22.092: 99.8205% ( 1) 00:12:57.271 23.040 - 23.135: 99.8283% ( 1) 00:12:57.271 23.419 - 23.514: 99.8361% ( 1) 00:12:57.271 23.799 - 23.893: 99.8517% ( 2) 00:12:57.271 24.652 - 24.841: 99.8751% ( 3) 00:12:57.271 27.117 - 27.307: 99.8829% ( 1) 00:12:57.271 28.824 - 29.013: 99.8908% ( 1) 00:12:57.271 29.013 - 29.203: 99.8986% ( 1) 00:12:57.271 3980.705 - 4004.978: 99.9610% ( 8) 00:12:57.271 4004.978 - 4029.250: 100.0000% ( 5) 00:12:57.271 00:12:57.271 Complete histogram 00:12:57.271 ================== 00:12:57.271 Range in us Cumulative Count 00:12:57.271 2.062 - 2.074: 0.1795% ( 23) 00:12:57.271 2.074 - 2.086: 24.6742% ( 3139) 00:12:57.271 2.086 - 2.098: 37.5185% ( 1646) 00:12:57.271 2.098 - 2.110: 39.7893% ( 291) 00:12:57.271 2.110 - 2.121: 49.1065% ( 1194) 00:12:57.271 2.121 - 2.133: 51.5802% ( 317) 00:12:57.271 2.133 - 2.145: 54.4674% ( 370) 00:12:57.271 2.145 - 2.157: 67.1791% ( 1629) 00:12:57.271 2.157 - 2.169: 70.5111% ( 427) 00:12:57.271 2.169 - 2.181: 72.2201% ( 219) 00:12:57.271 2.181 - 2.193: 75.7940% ( 458) 00:12:57.271 2.193 - 2.204: 76.6914% ( 115) 00:12:57.271 2.204 - 2.216: 77.7682% ( 138) 00:12:57.271 2.216 - 2.228: 
84.0265% ( 802) 00:12:57.271 2.228 - 2.240: 87.6083% ( 459) 00:12:57.271 2.240 - 2.252: 89.0597% ( 186) 00:12:57.271 2.252 - 2.264: 90.9169% ( 238) 00:12:57.271 2.264 - 2.276: 91.5099% ( 76) 00:12:57.271 2.276 - 2.287: 91.9313% ( 54) 00:12:57.271 2.287 - 2.299: 92.7039% ( 99) 00:12:57.271 2.299 - 2.311: 93.7807% ( 138) 00:12:57.271 2.311 - 2.323: 94.4986% ( 92) 00:12:57.271 2.323 - 2.335: 94.5767% ( 10) 00:12:57.271 2.335 - 2.347: 94.6079% ( 4) 00:12:57.271 2.347 - 2.359: 94.6625% ( 7) 00:12:57.271 2.359 - 2.370: 94.7952% ( 17) 00:12:57.271 2.370 - 2.382: 95.0059% ( 27) 00:12:57.271 2.382 - 2.394: 95.4194% ( 53) 00:12:57.271 2.394 - 2.406: 95.5989% ( 23) 00:12:57.271 2.406 - 2.418: 95.6769% ( 10) 00:12:57.271 2.418 - 2.430: 95.8798% ( 26) 00:12:57.271 2.430 - 2.441: 96.1061% ( 29) 00:12:57.271 2.441 - 2.453: 96.3402% ( 30) 00:12:57.271 2.453 - 2.465: 96.6290% ( 37) 00:12:57.271 2.465 - 2.477: 96.8318% ( 26) 00:12:57.271 2.477 - 2.489: 96.9879% ( 20) 00:12:57.271 2.489 - 2.501: 97.1440% ( 20) 00:12:57.271 2.501 - 2.513: 97.2922% ( 19) 00:12:57.271 2.513 - 2.524: 97.4327% ( 18) 00:12:57.271 2.524 - 2.536: 97.5341% ( 13) 00:12:57.271 2.536 - 2.548: 97.6044% ( 9) 00:12:57.271 2.548 - 2.560: 97.7214% ( 15) 00:12:57.271 2.560 - 2.572: 97.8073% ( 11) 00:12:57.271 2.572 - 2.584: 97.8385% ( 4) 00:12:57.271 2.584 - 2.596: 97.8463% ( 1) 00:12:57.271 2.596 - 2.607: 97.9009% ( 7) 00:12:57.271 2.607 - 2.619: 97.9243% ( 3) 00:12:57.271 2.619 - 2.631: 97.9321% ( 1) 00:12:57.271 2.631 - 2.643: 97.9555% ( 3) 00:12:57.271 2.643 - 2.655: 97.9633% ( 1) 00:12:57.271 2.667 - 2.679: 97.9945% ( 4) 00:12:57.271 2.679 - 2.690: 98.0101% ( 2) 00:12:57.271 2.690 - 2.702: 98.0179% ( 1) 00:12:57.271 2.702 - 2.714: 98.0258% ( 1) 00:12:57.271 2.714 - 2.726: 98.0336% ( 1) 00:12:57.271 2.738 - 2.750: 98.0414% ( 1) 00:12:57.271 2.761 - 2.773: 98.0492% ( 1) 00:12:57.271 2.785 - 2.797: 98.0648% ( 2) 00:12:57.271 2.797 - 2.809: 98.0882% ( 3) 00:12:57.271 2.809 - 2.821: 98.0960% ( 1) 00:12:57.271 2.844 - 2.856: 98.1038% ( 1) 00:12:57.271 2.892 - 2.904: 98.1116% ( 1) 00:12:57.271 2.951 - 2.963: 98.1194% ( 1) 00:12:57.271 2.999 - 3.010: 98.1350% ( 2) 00:12:57.271 3.058 - 3.081: 98.1428% ( 1) 00:12:57.271 3.200 - 3.224: 98.1506% ( 1) 00:12:57.271 3.247 - 3.271: 98.1584% ( 1) 00:12:57.271 3.437 - 3.461: 98.1662% ( 1) 00:12:57.271 3.484 - 3.508: 98.1740% ( 1) 00:12:57.271 3.508 - 3.532: 98.1818% ( 1) 00:12:57.271 3.532 - 3.556: 98.1896% ( 1) 00:12:57.271 3.556 - 3.579: 98.2130% ( 3) 00:12:57.271 3.579 - 3.603: 98.2208% ( 1) 00:12:57.271 3.603 - 3.627: 98.2364% ( 2) 00:12:57.271 3.650 - 3.674: 98.2520% ( 2) 00:12:57.271 3.698 - 3.721: 98.2755% ( 3) 00:12:57.271 3.745 - 3.769: 98.2911% ( 2) 00:12:57.271 3.769 - 3.793: 98.2989% ( 1) 00:12:57.271 3.840 - 3.864: 98.3067% ( 1) 00:12:57.271 3.887 - 3.911: 98.3145% ( 1) 00:12:57.271 3.911 - 3.935: 98.3223% ( 1) 00:12:57.271 3.982 - 4.006: 98.3301% ( 1) 00:12:57.271 4.030 - 4.053: 98.3379% ( 1) 00:12:57.272 4.172 - 4.196: 98.3457% ( 1) 00:12:57.272 4.338 - 4.361: 98.3535% ( 1) 00:12:57.272 4.385 - 4.409: 98.3613% ( 1) 00:12:57.272 4.409 - 4.433: 98.3691% ( 1) 00:12:57.272 4.622 - 4.646: 98.3769% ( 1) 00:12:57.272 4.812 - 4.836: 98.3847% ( 1) 00:12:57.272 4.836 - 4.859: 98.4003% ( 2) 00:12:57.272 5.191 - 5.215: 98.4081% ( 1) 00:12:57.272 5.997 - 6.021: 98.4159% ( 1) 00:12:57.272 6.258 - 6.305: 98.4315% ( 2) 00:12:57.272 6.495 - 6.542: 98.4393% ( 1) 00:12:57.272 6.732 - 6.779: 98.4549% ( 2) 00:12:57.272 7.016 - 7.064: 98.4705% ( 2) 00:12:57.272 7.111 - 7.159: 98.4783% ( 1) 00:12:57.272 7.206 - 
7.253: 98.4861% ( 1) 00:12:57.272 7.396 - 7.443: 98.4940% ( 1) 00:12:57.272 7.633 - 7.680: 98.5096% ( 2) 00:12:57.272 8.012 - 8.059: 98.5174% ( 1) 00:12:57.272 8.201 - 8.249: 98.5252% ( 1) 00:12:57.272 8.296 - 8.344: 98.5408% ( 2) 00:12:57.272 8.391 - 8.439: 98.5486% ( 1) 00:12:57.272 8.723 - 8.770: 98.5564% ( 1) 00:12:57.272 8.818 - 8.865: 98.5642% ( 1) 00:12:57.272 8.865 - 8.913: 98.5720% ( 1) 00:12:57.272 9.434 - 9.481: 98.5798% ( 1) 00:12:57.272 10.003 - 10.050: 98.5876% ( 1) 00:12:57.272 10.904 - 10.951: 98.5954% ( 1) 00:12:57.272 11.046 - 11.093: 98.6032% ( 1) 00:12:57.272 14.033 - 14.127: 98.6110% ( 1) 00:12:57.272 15.360 - 15.455: 98.6188% ( 1) 00:12:57.272 15.455 - 15.550: 98.6266% ( 1) 00:12:57.272 15.550 - 15.644: 98.6422% ( 2) 00:12:57.272 15.644 - 15.739: 98.6500% ( 1) 00:12:57.272 15.739 - 15.834: 98.6656% ( 2) 00:12:57.272 15.834 - 15.929: 98.6812% ( 2) 00:12:57.272 15.929 - 16.024: 98.7202% ( 5) 00:12:57.272 16.024 - 16.119: 98.7515% ( 4) 00:12:57.272 16.119 - 16.213: 98.7905% ( 5) 00:12:57.272 16.213 - 16.308: 98.8529% ( 8) 00:12:57.272 16.308 - 16.403: 98.9075% ( 7) 00:12:57.272 16.403 - 16.498: 98.9622% ( 7) 00:12:57.272 16.498 - 16.593: 98.9778% ( 2) 00:12:57.272 16.593 - 16.687: 99.0012% ( 3) 00:12:57.272 16.687 - 16.782: 99.0480% ( 6) 00:12:57.272 16.782 - 16.877: 99.0870% ( 5) 00:12:57.272 16.877 - 16.972: 99.1182% ( 4) 00:12:57.272 16.972 - 17.067: 99.1260% ( 1) 00:12:57.272 17.067 - 17.161: 99.1416% ( 2) 00:12:57.272 17.161 - 17.256: 99.1494% ( 1) 00:12:57.272 17.256 - 17.351: 99.1650%[2024-11-26 20:43:00.578395] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:57.272 ( 2) 00:12:57.272 17.351 - 17.446: 99.1728% ( 1) 00:12:57.272 17.446 - 17.541: 99.2041% ( 4) 00:12:57.272 17.541 - 17.636: 99.2119% ( 1) 00:12:57.272 17.636 - 17.730: 99.2275% ( 2) 00:12:57.272 17.730 - 17.825: 99.2353% ( 1) 00:12:57.272 17.920 - 18.015: 99.2509% ( 2) 00:12:57.272 18.110 - 18.204: 99.2665% ( 2) 00:12:57.272 18.204 - 18.299: 99.2821% ( 2) 00:12:57.272 18.299 - 18.394: 99.2899% ( 1) 00:12:57.272 18.489 - 18.584: 99.2977% ( 1) 00:12:57.272 18.679 - 18.773: 99.3055% ( 1) 00:12:57.272 18.773 - 18.868: 99.3133% ( 1) 00:12:57.272 21.807 - 21.902: 99.3211% ( 1) 00:12:57.272 22.376 - 22.471: 99.3289% ( 1) 00:12:57.272 22.471 - 22.566: 99.3367% ( 1) 00:12:57.272 22.566 - 22.661: 99.3445% ( 1) 00:12:57.272 26.359 - 26.548: 99.3523% ( 1) 00:12:57.272 29.772 - 29.961: 99.3601% ( 1) 00:12:57.272 3980.705 - 4004.978: 99.6723% ( 40) 00:12:57.272 4004.978 - 4029.250: 100.0000% ( 42) 00:12:57.272 00:12:57.272 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:57.272 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:57.272 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:57.272 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:57.272 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:57.272 [ 00:12:57.272 { 00:12:57.272 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:57.272 "subtype": "Discovery", 00:12:57.272 "listen_addresses": [], 00:12:57.272 
"allow_any_host": true, 00:12:57.272 "hosts": [] 00:12:57.272 }, 00:12:57.272 { 00:12:57.272 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:57.272 "subtype": "NVMe", 00:12:57.272 "listen_addresses": [ 00:12:57.272 { 00:12:57.272 "trtype": "VFIOUSER", 00:12:57.272 "adrfam": "IPv4", 00:12:57.272 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:57.272 "trsvcid": "0" 00:12:57.272 } 00:12:57.272 ], 00:12:57.272 "allow_any_host": true, 00:12:57.272 "hosts": [], 00:12:57.272 "serial_number": "SPDK1", 00:12:57.272 "model_number": "SPDK bdev Controller", 00:12:57.272 "max_namespaces": 32, 00:12:57.272 "min_cntlid": 1, 00:12:57.272 "max_cntlid": 65519, 00:12:57.272 "namespaces": [ 00:12:57.272 { 00:12:57.272 "nsid": 1, 00:12:57.272 "bdev_name": "Malloc1", 00:12:57.272 "name": "Malloc1", 00:12:57.272 "nguid": "E084B91620E44569A80806566BC9C485", 00:12:57.272 "uuid": "e084b916-20e4-4569-a808-06566bc9c485" 00:12:57.272 }, 00:12:57.272 { 00:12:57.272 "nsid": 2, 00:12:57.272 "bdev_name": "Malloc3", 00:12:57.272 "name": "Malloc3", 00:12:57.272 "nguid": "5E30DAAAE2CC4EA68DE13AE976F08B17", 00:12:57.272 "uuid": "5e30daaa-e2cc-4ea6-8de1-3ae976f08b17" 00:12:57.272 } 00:12:57.272 ] 00:12:57.272 }, 00:12:57.272 { 00:12:57.272 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:57.272 "subtype": "NVMe", 00:12:57.272 "listen_addresses": [ 00:12:57.272 { 00:12:57.272 "trtype": "VFIOUSER", 00:12:57.272 "adrfam": "IPv4", 00:12:57.272 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:57.272 "trsvcid": "0" 00:12:57.272 } 00:12:57.272 ], 00:12:57.272 "allow_any_host": true, 00:12:57.272 "hosts": [], 00:12:57.272 "serial_number": "SPDK2", 00:12:57.272 "model_number": "SPDK bdev Controller", 00:12:57.272 "max_namespaces": 32, 00:12:57.272 "min_cntlid": 1, 00:12:57.272 "max_cntlid": 65519, 00:12:57.272 "namespaces": [ 00:12:57.272 { 00:12:57.272 "nsid": 1, 00:12:57.272 "bdev_name": "Malloc2", 00:12:57.272 "name": "Malloc2", 00:12:57.272 "nguid": "B5110E47520949F28789C1EA49776C57", 00:12:57.272 "uuid": "b5110e47-5209-49f2-8789-c1ea49776c57" 00:12:57.272 } 00:12:57.272 ] 00:12:57.272 } 00:12:57.272 ] 00:12:57.272 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:57.272 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1638432 00:12:57.272 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:57.272 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:57.272 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:12:57.272 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:57.272 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:57.272 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:12:57.272 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:57.272 20:43:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:57.530 [2024-11-26 20:43:01.122812] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:57.787 Malloc4 00:12:57.787 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:58.044 [2024-11-26 20:43:01.526076] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:58.044 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:58.044 Asynchronous Event Request test 00:12:58.044 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:58.044 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:58.044 Registering asynchronous event callbacks... 00:12:58.044 Starting namespace attribute notice tests for all controllers... 00:12:58.044 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:58.044 aer_cb - Changed Namespace 00:12:58.044 Cleaning up... 00:12:58.302 [ 00:12:58.302 { 00:12:58.302 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:58.302 "subtype": "Discovery", 00:12:58.302 "listen_addresses": [], 00:12:58.302 "allow_any_host": true, 00:12:58.302 "hosts": [] 00:12:58.302 }, 00:12:58.302 { 00:12:58.302 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:58.302 "subtype": "NVMe", 00:12:58.302 "listen_addresses": [ 00:12:58.302 { 00:12:58.302 "trtype": "VFIOUSER", 00:12:58.302 "adrfam": "IPv4", 00:12:58.302 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:58.302 "trsvcid": "0" 00:12:58.302 } 00:12:58.302 ], 00:12:58.302 "allow_any_host": true, 00:12:58.302 "hosts": [], 00:12:58.302 "serial_number": "SPDK1", 00:12:58.302 "model_number": "SPDK bdev Controller", 00:12:58.302 "max_namespaces": 32, 00:12:58.302 "min_cntlid": 1, 00:12:58.302 "max_cntlid": 65519, 00:12:58.302 "namespaces": [ 00:12:58.302 { 00:12:58.302 "nsid": 1, 00:12:58.302 "bdev_name": "Malloc1", 00:12:58.302 "name": "Malloc1", 00:12:58.302 "nguid": "E084B91620E44569A80806566BC9C485", 00:12:58.302 "uuid": "e084b916-20e4-4569-a808-06566bc9c485" 00:12:58.302 }, 00:12:58.302 { 00:12:58.302 "nsid": 2, 00:12:58.302 "bdev_name": "Malloc3", 00:12:58.302 "name": "Malloc3", 00:12:58.302 "nguid": "5E30DAAAE2CC4EA68DE13AE976F08B17", 00:12:58.302 "uuid": "5e30daaa-e2cc-4ea6-8de1-3ae976f08b17" 00:12:58.302 } 00:12:58.302 ] 00:12:58.302 }, 00:12:58.302 { 00:12:58.302 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:58.302 "subtype": "NVMe", 00:12:58.302 "listen_addresses": [ 00:12:58.302 { 00:12:58.302 "trtype": "VFIOUSER", 00:12:58.302 "adrfam": "IPv4", 00:12:58.302 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:58.302 "trsvcid": "0" 00:12:58.302 } 00:12:58.302 ], 00:12:58.302 "allow_any_host": true, 00:12:58.302 "hosts": [], 00:12:58.302 "serial_number": "SPDK2", 00:12:58.302 "model_number": "SPDK bdev 
Controller", 00:12:58.302 "max_namespaces": 32, 00:12:58.302 "min_cntlid": 1, 00:12:58.302 "max_cntlid": 65519, 00:12:58.302 "namespaces": [ 00:12:58.302 { 00:12:58.302 "nsid": 1, 00:12:58.302 "bdev_name": "Malloc2", 00:12:58.302 "name": "Malloc2", 00:12:58.302 "nguid": "B5110E47520949F28789C1EA49776C57", 00:12:58.302 "uuid": "b5110e47-5209-49f2-8789-c1ea49776c57" 00:12:58.302 }, 00:12:58.302 { 00:12:58.302 "nsid": 2, 00:12:58.302 "bdev_name": "Malloc4", 00:12:58.302 "name": "Malloc4", 00:12:58.302 "nguid": "04ABBAB50E5645379A0C638EC1543F1F", 00:12:58.302 "uuid": "04abbab5-0e56-4537-9a0c-638ec1543f1f" 00:12:58.302 } 00:12:58.302 ] 00:12:58.302 } 00:12:58.302 ] 00:12:58.302 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1638432 00:12:58.302 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:58.302 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1632803 00:12:58.302 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1632803 ']' 00:12:58.302 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1632803 00:12:58.302 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:12:58.302 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:58.302 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1632803 00:12:58.302 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:58.302 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:58.302 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1632803' 00:12:58.302 killing process with pid 1632803 00:12:58.302 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1632803 00:12:58.302 20:43:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1632803 00:12:58.559 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:58.559 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:58.559 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:58.559 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:58.559 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:58.559 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1638677 00:12:58.559 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:58.559 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1638677' 00:12:58.559 Process pid: 1638677 00:12:58.559 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 
-- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:58.559 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1638677 00:12:58.559 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1638677 ']' 00:12:58.559 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.560 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:58.560 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.560 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:58.560 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:58.560 [2024-11-26 20:43:02.227970] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:58.560 [2024-11-26 20:43:02.229014] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:12:58.560 [2024-11-26 20:43:02.229092] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:58.818 [2024-11-26 20:43:02.297497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:58.818 [2024-11-26 20:43:02.350992] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:58.818 [2024-11-26 20:43:02.351047] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:58.818 [2024-11-26 20:43:02.351074] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:58.818 [2024-11-26 20:43:02.351085] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:58.818 [2024-11-26 20:43:02.351094] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:58.818 [2024-11-26 20:43:02.352544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:58.818 [2024-11-26 20:43:02.352572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:58.818 [2024-11-26 20:43:02.352630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:58.818 [2024-11-26 20:43:02.352633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.818 [2024-11-26 20:43:02.437476] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:12:58.818 [2024-11-26 20:43:02.437670] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:12:58.818 [2024-11-26 20:43:02.437993] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:12:58.818 [2024-11-26 20:43:02.438707] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:12:58.818 [2024-11-26 20:43:02.438948] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:12:58.818 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:58.818 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:12:58.818 20:43:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:00.195 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:00.195 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:00.195 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:00.195 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:00.195 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:00.195 20:43:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:00.763 Malloc1 00:13:00.763 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:01.021 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:01.279 20:43:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:01.556 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:01.556 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:01.556 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:01.817 Malloc2 00:13:01.817 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:02.074 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:02.331 20:43:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:02.606 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:02.606 20:43:06 
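The @54-@74 sequence above restarts the target in interrupt mode and then provisions both vfio-user devices over RPC. Unrolled for one device, a hedged sketch with the arguments copied from the traced commands (pid handling is simplified; the script itself uses waitforlisten at @60):

    $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &   # @54
    sleep 1                                                                     # @62
    $RPC nvmf_create_transport -t VFIOUSER -M -I                                # @64
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1                             # @66/@69
    $RPC bdev_malloc_create 64 512 -b Malloc1                                   # @71
    $RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1           # @72
    $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1               # @73
    $RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
        -a /var/run/vfio-user/domain/vfio-user1/1 -s 0                          # @74
    # the same @69-@74 block is then repeated for Malloc2 / cnode2 / vfio-user2/2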
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1638677 00:13:02.606 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1638677 ']' 00:13:02.606 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1638677 00:13:02.606 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:13:02.606 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:02.606 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1638677 00:13:02.606 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:02.606 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:02.606 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1638677' 00:13:02.606 killing process with pid 1638677 00:13:02.606 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1638677 00:13:02.935 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1638677 00:13:02.936 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:02.936 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:02.936 00:13:02.936 real 0m54.019s 00:13:02.936 user 3m28.703s 00:13:02.936 sys 0m3.973s 00:13:02.936 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:02.936 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:02.936 ************************************ 00:13:02.936 END TEST nvmf_vfio_user 00:13:02.936 ************************************ 00:13:02.936 20:43:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:02.936 20:43:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:02.936 20:43:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:02.936 20:43:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:03.194 ************************************ 00:13:03.194 START TEST nvmf_vfio_user_nvme_compliance 00:13:03.194 ************************************ 00:13:03.194 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:03.194 * Looking for test storage... 
00:13:03.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:03.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.195 --rc genhtml_branch_coverage=1 00:13:03.195 --rc genhtml_function_coverage=1 00:13:03.195 --rc genhtml_legend=1 00:13:03.195 --rc geninfo_all_blocks=1 00:13:03.195 --rc geninfo_unexecuted_blocks=1 00:13:03.195 00:13:03.195 ' 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:03.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.195 --rc genhtml_branch_coverage=1 00:13:03.195 --rc genhtml_function_coverage=1 00:13:03.195 --rc genhtml_legend=1 00:13:03.195 --rc geninfo_all_blocks=1 00:13:03.195 --rc geninfo_unexecuted_blocks=1 00:13:03.195 00:13:03.195 ' 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:03.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.195 --rc genhtml_branch_coverage=1 00:13:03.195 --rc genhtml_function_coverage=1 00:13:03.195 --rc genhtml_legend=1 00:13:03.195 --rc geninfo_all_blocks=1 00:13:03.195 --rc geninfo_unexecuted_blocks=1 00:13:03.195 00:13:03.195 ' 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:03.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.195 --rc genhtml_branch_coverage=1 00:13:03.195 --rc genhtml_function_coverage=1 00:13:03.195 --rc genhtml_legend=1 00:13:03.195 --rc geninfo_all_blocks=1 00:13:03.195 --rc 
geninfo_unexecuted_blocks=1 00:13:03.195 00:13:03.195 ' 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:03.195 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:03.196 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1639285 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1639285' 00:13:03.196 Process pid: 1639285 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1639285 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 1639285 ']' 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:03.196 20:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:03.196 [2024-11-26 20:43:06.813357] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:13:03.196 [2024-11-26 20:43:06.813448] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.196 [2024-11-26 20:43:06.878969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:03.455 [2024-11-26 20:43:06.936653] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:03.455 [2024-11-26 20:43:06.936704] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:03.455 [2024-11-26 20:43:06.936732] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:03.455 [2024-11-26 20:43:06.936743] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:03.455 [2024-11-26 20:43:06.936752] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:03.455 [2024-11-26 20:43:06.938085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.455 [2024-11-26 20:43:06.938198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:03.455 [2024-11-26 20:43:06.938203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.455 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:03.455 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:13:03.455 20:43:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:04.825 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:04.825 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:04.825 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:04.825 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.825 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:04.825 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.825 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:04.825 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:04.825 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.825 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:04.825 malloc0 00:13:04.825 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.825 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:04.825 20:43:08 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.825 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:04.825 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.825 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:04.825 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.825 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:04.825 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.825 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:04.825 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.825 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:04.825 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.825 20:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:04.825 00:13:04.825 00:13:04.825 CUnit - A unit testing framework for C - Version 2.1-3 00:13:04.825 http://cunit.sourceforge.net/ 00:13:04.825 00:13:04.825 00:13:04.825 Suite: nvme_compliance 00:13:04.825 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-26 20:43:08.342851] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:04.825 [2024-11-26 20:43:08.344321] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:04.825 [2024-11-26 20:43:08.344347] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:04.825 [2024-11-26 20:43:08.344360] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:04.825 [2024-11-26 20:43:08.345866] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:04.825 passed 00:13:04.825 Test: admin_identify_ctrlr_verify_fused ...[2024-11-26 20:43:08.430451] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:04.826 [2024-11-26 20:43:08.433472] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:04.826 passed 00:13:04.826 Test: admin_identify_ns ...[2024-11-26 20:43:08.519913] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:05.083 [2024-11-26 20:43:08.579324] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:05.083 [2024-11-26 20:43:08.587338] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:05.084 [2024-11-26 20:43:08.608434] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:13:05.084 passed 00:13:05.084 Test: admin_get_features_mandatory_features ...[2024-11-26 20:43:08.692498] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:05.084 [2024-11-26 20:43:08.695516] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:05.084 passed 00:13:05.084 Test: admin_get_features_optional_features ...[2024-11-26 20:43:08.779089] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:05.341 [2024-11-26 20:43:08.782122] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:05.341 passed 00:13:05.341 Test: admin_set_features_number_of_queues ...[2024-11-26 20:43:08.866338] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:05.341 [2024-11-26 20:43:08.972389] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:05.341 passed 00:13:05.599 Test: admin_get_log_page_mandatory_logs ...[2024-11-26 20:43:09.055657] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:05.599 [2024-11-26 20:43:09.058683] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:05.599 passed 00:13:05.599 Test: admin_get_log_page_with_lpo ...[2024-11-26 20:43:09.142796] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:05.599 [2024-11-26 20:43:09.211320] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:05.599 [2024-11-26 20:43:09.224403] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:05.599 passed 00:13:05.856 Test: fabric_property_get ...[2024-11-26 20:43:09.307917] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:05.856 [2024-11-26 20:43:09.309192] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:05.856 [2024-11-26 20:43:09.310944] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:05.856 passed 00:13:05.856 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-26 20:43:09.394480] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:05.856 [2024-11-26 20:43:09.395782] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:05.856 [2024-11-26 20:43:09.397506] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:05.856 passed 00:13:05.856 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-26 20:43:09.480688] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:06.113 [2024-11-26 20:43:09.564313] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:06.113 [2024-11-26 20:43:09.580331] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:06.113 [2024-11-26 20:43:09.585414] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:06.113 passed 00:13:06.113 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-26 20:43:09.670957] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:06.113 [2024-11-26 20:43:09.672235] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:06.113 [2024-11-26 20:43:09.673981] vfio_user.c:2802:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:13:06.113 passed 00:13:06.113 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-26 20:43:09.753820] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:06.370 [2024-11-26 20:43:09.829315] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:06.370 [2024-11-26 20:43:09.853313] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:06.370 [2024-11-26 20:43:09.858409] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:06.370 passed 00:13:06.370 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-26 20:43:09.940526] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:06.370 [2024-11-26 20:43:09.941838] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:06.370 [2024-11-26 20:43:09.941893] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:06.370 [2024-11-26 20:43:09.943548] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:06.370 passed 00:13:06.370 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-26 20:43:10.029368] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:06.627 [2024-11-26 20:43:10.122318] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:13:06.627 [2024-11-26 20:43:10.130331] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:06.627 [2024-11-26 20:43:10.138312] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:06.627 [2024-11-26 20:43:10.146319] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:06.627 [2024-11-26 20:43:10.175540] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:06.627 passed 00:13:06.627 Test: admin_create_io_sq_verify_pc ...[2024-11-26 20:43:10.264772] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:06.627 [2024-11-26 20:43:10.281340] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:06.627 [2024-11-26 20:43:10.299180] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:06.884 passed 00:13:06.884 Test: admin_create_io_qp_max_qps ...[2024-11-26 20:43:10.380795] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:07.816 [2024-11-26 20:43:11.489343] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:13:08.381 [2024-11-26 20:43:11.866394] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:08.381 passed 00:13:08.381 Test: admin_create_io_sq_shared_cq ...[2024-11-26 20:43:11.949666] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:08.639 [2024-11-26 20:43:12.081326] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:08.639 [2024-11-26 20:43:12.118416] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:08.639 passed 00:13:08.639 00:13:08.639 Run Summary: Type Total Ran Passed Failed Inactive 00:13:08.639 suites 1 1 n/a 0 0 00:13:08.639 tests 18 18 18 0 0 00:13:08.639 asserts 
360 360 360 0 n/a 00:13:08.639 00:13:08.639 Elapsed time = 1.563 seconds 00:13:08.639 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1639285 00:13:08.639 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 1639285 ']' 00:13:08.639 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 1639285 00:13:08.639 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:13:08.639 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:08.639 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1639285 00:13:08.639 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:08.639 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:08.639 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1639285' 00:13:08.639 killing process with pid 1639285 00:13:08.639 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 1639285 00:13:08.639 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 1639285 00:13:08.897 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:08.897 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:08.897 00:13:08.897 real 0m5.841s 00:13:08.897 user 0m16.478s 00:13:08.897 sys 0m0.533s 00:13:08.897 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:08.897 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:08.897 ************************************ 00:13:08.897 END TEST nvmf_vfio_user_nvme_compliance 00:13:08.897 ************************************ 00:13:08.897 20:43:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:08.897 20:43:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:08.897 20:43:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.897 20:43:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:08.897 ************************************ 00:13:08.897 START TEST nvmf_vfio_user_fuzz 00:13:08.897 ************************************ 00:13:08.897 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:08.897 * Looking for test storage... 
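Teardown of the compliance run above goes through killprocess 1639285, which per the trace checks that the pid is still alive and that its comm name is not sudo before killing and reaping it. A rough sketch of that helper, covering only the Linux path seen in this log; the real autotest_common.sh version handles additional cases:

    # simplified sketch of the killprocess teardown traced above
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0          # already gone, nothing to do
        [ "$(uname)" = Linux ] || return 0              # sketch covers only the Linux branch
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid") # reactor_0 for an nvmf_tgt reactor
        [ "$process_name" = sudo ] && return 0          # assumption: sudo-owned targets handled elsewhere
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }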
00:13:08.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:08.897 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:08.897 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:13:08.898 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:09.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.157 --rc genhtml_branch_coverage=1 00:13:09.157 --rc genhtml_function_coverage=1 00:13:09.157 --rc genhtml_legend=1 00:13:09.157 --rc geninfo_all_blocks=1 00:13:09.157 --rc geninfo_unexecuted_blocks=1 00:13:09.157 00:13:09.157 ' 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:09.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.157 --rc genhtml_branch_coverage=1 00:13:09.157 --rc genhtml_function_coverage=1 00:13:09.157 --rc genhtml_legend=1 00:13:09.157 --rc geninfo_all_blocks=1 00:13:09.157 --rc geninfo_unexecuted_blocks=1 00:13:09.157 00:13:09.157 ' 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:09.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.157 --rc genhtml_branch_coverage=1 00:13:09.157 --rc genhtml_function_coverage=1 00:13:09.157 --rc genhtml_legend=1 00:13:09.157 --rc geninfo_all_blocks=1 00:13:09.157 --rc geninfo_unexecuted_blocks=1 00:13:09.157 00:13:09.157 ' 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:09.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.157 --rc genhtml_branch_coverage=1 00:13:09.157 --rc genhtml_function_coverage=1 00:13:09.157 --rc genhtml_legend=1 00:13:09.157 --rc geninfo_all_blocks=1 00:13:09.157 --rc geninfo_unexecuted_blocks=1 00:13:09.157 00:13:09.157 ' 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.157 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.158 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.158 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:09.158 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.158 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:13:09.158 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:09.158 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:09.158 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:09.158 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:09.158 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:09.158 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:09.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:09.158 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:09.158 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:09.158 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:09.158 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:09.158 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:09.158 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:09.158 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:09.158 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:09.158 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:09.158 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:09.158 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1640018 00:13:09.158 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:09.158 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1640018' 00:13:09.158 Process pid: 1640018 00:13:09.158 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:09.158 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1640018 00:13:09.158 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1640018 ']' 00:13:09.158 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.158 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:09.158 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:09.158 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:09.158 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:09.416 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:09.416 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:13:09.416 20:43:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:10.349 20:43:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:10.349 20:43:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.349 20:43:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:10.349 20:43:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.349 20:43:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:10.349 20:43:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:10.349 20:43:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.349 20:43:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:10.349 malloc0 00:13:10.349 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.349 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:10.349 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.349 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:10.349 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.349 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:10.349 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.349 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:10.349 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.349 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:10.349 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.349 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:10.607 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.607 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
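Condensed, the provisioning that vfio_user_fuzz.sh@32-41 just performed against the fresh target is the following RPC sequence. The calls are the same ones traced above, shown back to back; rpc_cmd is the harness wrapper around the SPDK RPC socket:

    # same steps as vfio_user_fuzz.sh@32-39 above
    rpc_cmd nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    rpc_cmd bdev_malloc_create 64 512 -b malloc0       # 64 MB malloc bdev, 512-byte blocks
    rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0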
00:13:10.607 20:43:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:42.681 Fuzzing completed. Shutting down the fuzz application 00:13:42.681 00:13:42.681 Dumping successful admin opcodes: 00:13:42.681 9, 10, 00:13:42.681 Dumping successful io opcodes: 00:13:42.681 0, 00:13:42.681 NS: 0x20000081ef00 I/O qp, Total commands completed: 683469, total successful commands: 2661, random_seed: 2341512064 00:13:42.681 NS: 0x20000081ef00 admin qp, Total commands completed: 139933, total successful commands: 31, random_seed: 603544896 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1640018 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1640018 ']' 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 1640018 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1640018 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1640018' 00:13:42.681 killing process with pid 1640018 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 1640018 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 1640018 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:42.681 00:13:42.681 real 0m32.286s 00:13:42.681 user 0m30.391s 00:13:42.681 sys 0m30.744s 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:42.681 ************************************ 
00:13:42.681 END TEST nvmf_vfio_user_fuzz 00:13:42.681 ************************************ 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:42.681 ************************************ 00:13:42.681 START TEST nvmf_auth_target 00:13:42.681 ************************************ 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:42.681 * Looking for test storage... 00:13:42.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:13:42.681 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:13:42.682 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:13:42.682 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:13:42.682 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:42.682 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:13:42.682 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:13:42.682 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:42.682 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:42.682 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:13:42.682 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:42.682 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:42.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.682 --rc genhtml_branch_coverage=1 00:13:42.682 --rc genhtml_function_coverage=1 00:13:42.682 --rc genhtml_legend=1 00:13:42.682 --rc geninfo_all_blocks=1 00:13:42.682 --rc geninfo_unexecuted_blocks=1 00:13:42.682 00:13:42.682 ' 00:13:42.682 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:42.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.682 --rc genhtml_branch_coverage=1 00:13:42.682 --rc genhtml_function_coverage=1 00:13:42.682 --rc genhtml_legend=1 00:13:42.682 --rc geninfo_all_blocks=1 00:13:42.682 --rc geninfo_unexecuted_blocks=1 00:13:42.682 00:13:42.682 ' 00:13:42.682 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:42.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.682 --rc genhtml_branch_coverage=1 00:13:42.682 --rc genhtml_function_coverage=1 00:13:42.682 --rc genhtml_legend=1 00:13:42.682 --rc geninfo_all_blocks=1 00:13:42.682 --rc geninfo_unexecuted_blocks=1 00:13:42.682 00:13:42.682 ' 00:13:42.682 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:42.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:42.682 --rc genhtml_branch_coverage=1 00:13:42.682 --rc genhtml_function_coverage=1 00:13:42.682 --rc genhtml_legend=1 00:13:42.682 --rc geninfo_all_blocks=1 00:13:42.682 --rc geninfo_unexecuted_blocks=1 00:13:42.682 00:13:42.682 ' 00:13:42.682 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:42.682 20:43:44 
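The scripts/common.sh block traced just above (and twice earlier in this log) is the lcov gate: it splits the reported lcov version and the literal 2 on dots and compares field by field, and because 1.15 is below 2 it selects the --rc lcov_*_coverage flag spelling. A simplified sketch of that comparison, assuming only the numeric case the trace exercises; the real cmp_versions also handles other operators and non-numeric fields:

    # simplified sketch of the lt/cmp_versions logic traced above
    version_lt() {                              # returns 0 when $1 < $2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1
    }
    # usage matching the trace: pick the lcov flags based on 'lcov --version'
    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi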
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:13:42.682 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:42.682 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:42.682 20:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:42.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:13:42.682 20:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.618 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:43.618 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:13:43.618 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:43.618 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:43.618 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:43.618 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:43.618 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:43.618 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:13:43.618 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:43.618 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:13:43.618 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:13:43.618 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:13:43.618 
20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:13:43.618 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:13:43.618 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:13:43.618 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:43.618 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:43.619 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:43.619 20:43:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:43.619 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:43.619 Found net devices under 0000:09:00.0: cvl_0_0 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:43.619 Found net devices under 0000:09:00.1: cvl_0_1 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:43.619 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:43.878 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:43.878 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:43.878 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:43.878 20:43:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:43.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:43.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:13:43.878 00:13:43.878 --- 10.0.0.2 ping statistics --- 00:13:43.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.878 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:13:43.878 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:43.878 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:43.878 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:13:43.878 00:13:43.878 --- 10.0.0.1 ping statistics --- 00:13:43.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.878 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:13:43.878 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.878 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:13:43.878 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:43.878 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.878 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:43.878 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:43.878 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.878 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:43.878 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:43.878 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:13:43.878 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:43.878 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:43.878 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.878 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1645477 00:13:43.878 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:43.878 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1645477 00:13:43.878 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1645477 ']' 00:13:43.878 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.878 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:43.878 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
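Note on the nvmf_tcp_init step traced above: the test builds a loopback topology from the two E810 ports, moving one port into a private network namespace to act as the target while the other stays in the root namespace as the initiator. A minimal sketch of that setup, assuming the interface names and addresses printed in the trace:

# Target side lives in its own namespace; initiator stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP listener port toward the initiator interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Sanity checks mirrored from the trace: each side pings the other once.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1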
00:13:43.878 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:43.878 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1645508 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d2b3d7984ea6b297337376f563c65df7bb101b98856d35f2 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.okc 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d2b3d7984ea6b297337376f563c65df7bb101b98856d35f2 0 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d2b3d7984ea6b297337376f563c65df7bb101b98856d35f2 0 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d2b3d7984ea6b297337376f563c65df7bb101b98856d35f2 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
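The two applications started above carry the rest of the auth test: the NVMe-oF target runs inside the target namespace with nvmf_auth tracing enabled, and a second spdk_tgt instance plays the host side, serving RPCs on /var/tmp/host.sock with nvme_auth tracing. Roughly, with the flags copied from the trace and paths shown relative to the SPDK tree (an illustration, not the auth.sh source):

# Target application, inside the target namespace, with target-side DH-HMAC-CHAP debug logs.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
# Host-side application on its own RPC socket, with host-side auth debug logs.
./build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &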
00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.okc 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.okc 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.okc 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c792fec0b7e55adc48656120d73de5896283a0c1d675f182afb85229de134c0f 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.AvC 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c792fec0b7e55adc48656120d73de5896283a0c1d675f182afb85229de134c0f 3 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c792fec0b7e55adc48656120d73de5896283a0c1d675f182afb85229de134c0f 3 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c792fec0b7e55adc48656120d73de5896283a0c1d675f182afb85229de134c0f 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.AvC 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.AvC 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.AvC 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:13:44.137 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:44.138 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6b25e9f343f5ec2954724708442062dd 00:13:44.138 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:13:44.138 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.2KZ 00:13:44.138 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6b25e9f343f5ec2954724708442062dd 1 00:13:44.138 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6b25e9f343f5ec2954724708442062dd 1 00:13:44.138 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:44.138 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:44.138 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6b25e9f343f5ec2954724708442062dd 00:13:44.138 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:13:44.138 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.2KZ 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.2KZ 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.2KZ 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a5dd4cbec6a263b636bd9c7f11903f01394c56efa6294a54 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.btC 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a5dd4cbec6a263b636bd9c7f11903f01394c56efa6294a54 2 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a5dd4cbec6a263b636bd9c7f11903f01394c56efa6294a54 2 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:44.397 20:43:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a5dd4cbec6a263b636bd9c7f11903f01394c56efa6294a54 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.btC 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.btC 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.btC 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=505183e5dd530f245a99e7cac1622be00a58a001509de85d 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.xlU 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 505183e5dd530f245a99e7cac1622be00a58a001509de85d 2 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 505183e5dd530f245a99e7cac1622be00a58a001509de85d 2 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=505183e5dd530f245a99e7cac1622be00a58a001509de85d 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.xlU 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.xlU 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.xlU 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4b0f86b94fdbb5ce67b8d2e1693b3183 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.6wk 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4b0f86b94fdbb5ce67b8d2e1693b3183 1 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4b0f86b94fdbb5ce67b8d2e1693b3183 1 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4b0f86b94fdbb5ce67b8d2e1693b3183 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.6wk 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.6wk 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.6wk 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:44.397 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:44.398 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:44.398 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:13:44.398 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:13:44.398 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:44.398 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1905b46b3e57f92fd0188d469116250b9dd4d9e02250b360502c7513d72a77df 00:13:44.398 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:13:44.398 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.SXr 00:13:44.398 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 1905b46b3e57f92fd0188d469116250b9dd4d9e02250b360502c7513d72a77df 3 00:13:44.398 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1905b46b3e57f92fd0188d469116250b9dd4d9e02250b360502c7513d72a77df 3 00:13:44.398 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:44.398 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:44.398 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1905b46b3e57f92fd0188d469116250b9dd4d9e02250b360502c7513d72a77df 00:13:44.398 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:13:44.398 20:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:44.398 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.SXr 00:13:44.398 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.SXr 00:13:44.398 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.SXr 00:13:44.398 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:13:44.398 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1645477 00:13:44.398 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1645477 ']' 00:13:44.398 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.398 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:44.398 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.398 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:44.398 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.656 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:44.656 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:44.656 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1645508 /var/tmp/host.sock 00:13:44.656 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1645508 ']' 00:13:44.656 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:44.656 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:44.656 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:44.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
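The gen_dhchap_key calls traced above draw len/2 random bytes as a hex string (xxd -p -c0 -l $((len/2)) /dev/urandom), then wrap them into a configured DH-HMAC-CHAP secret of the form DHHC-1:<hash id>:<base64 data>:. The digest-id mapping (null=0, sha256=1, sha384=2, sha512=3) is taken from the trace itself; the trailing CRC-32 and its byte order in the sketch below are assumptions about the usual NVMe-oF secret layout, not read from nvmf/common.sh:

gen_dhchap_key() {
  local digest=$1 len=$2 key file
  declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)      # hex string of $len characters
  file=$(mktemp -t "spdk.key-$digest.XXX")
  # Secret = base64(ASCII hex key || CRC-32 of that key), in the DHHC-1 envelope.
python3 - "$key" "${digests[$digest]}" > "$file" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")   # byte order assumed
print(f"DHHC-1:{int(sys.argv[2]):02}:{base64.b64encode(key + crc).decode()}:")
PY
  chmod 0600 "$file"
  echo "$file"
}

gen_dhchap_key null 48      # e.g. /tmp/spdk.key-null.XXX, used as keys[0] above
gen_dhchap_key sha512 64    # e.g. /tmp/spdk.key-sha512.XXX, used as ckeys[0] above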
00:13:44.656 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:44.656 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.913 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:44.913 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:44.914 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:13:44.914 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.914 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.172 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.172 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:45.172 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.okc 00:13:45.172 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.172 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.172 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.172 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.okc 00:13:45.172 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.okc 00:13:45.430 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.AvC ]] 00:13:45.430 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AvC 00:13:45.430 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.430 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.430 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.430 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AvC 00:13:45.430 20:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AvC 00:13:45.687 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:45.687 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.2KZ 00:13:45.687 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.687 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.687 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.687 20:43:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.2KZ 00:13:45.687 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.2KZ 00:13:45.944 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.btC ]] 00:13:45.944 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.btC 00:13:45.944 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.944 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.944 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.944 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.btC 00:13:45.944 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.btC 00:13:46.202 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:46.202 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.xlU 00:13:46.202 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.202 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.202 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.202 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.xlU 00:13:46.202 20:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.xlU 00:13:46.460 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.6wk ]] 00:13:46.460 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6wk 00:13:46.460 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.460 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.460 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.460 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6wk 00:13:46.460 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6wk 00:13:46.718 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:46.718 20:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.SXr 00:13:46.718 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.718 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.718 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.718 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.SXr 00:13:46.718 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.SXr 00:13:46.976 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:13:46.976 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:46.976 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:46.976 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:46.976 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:46.976 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:47.234 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:13:47.234 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:47.234 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:47.234 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:47.234 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:47.234 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:47.234 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.234 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.234 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.234 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.234 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.234 20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.234 
20:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.492 00:13:47.492 20:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:47.492 20:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:47.492 20:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:48.058 20:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:48.058 20:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:48.058 20:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.058 20:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.058 20:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.058 20:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:48.058 { 00:13:48.058 "cntlid": 1, 00:13:48.058 "qid": 0, 00:13:48.058 "state": "enabled", 00:13:48.058 "thread": "nvmf_tgt_poll_group_000", 00:13:48.058 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:13:48.058 "listen_address": { 00:13:48.058 "trtype": "TCP", 00:13:48.058 "adrfam": "IPv4", 00:13:48.058 "traddr": "10.0.0.2", 00:13:48.058 "trsvcid": "4420" 00:13:48.058 }, 00:13:48.058 "peer_address": { 00:13:48.058 "trtype": "TCP", 00:13:48.058 "adrfam": "IPv4", 00:13:48.058 "traddr": "10.0.0.1", 00:13:48.058 "trsvcid": "42280" 00:13:48.058 }, 00:13:48.058 "auth": { 00:13:48.058 "state": "completed", 00:13:48.058 "digest": "sha256", 00:13:48.058 "dhgroup": "null" 00:13:48.058 } 00:13:48.058 } 00:13:48.058 ]' 00:13:48.058 20:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:48.058 20:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:48.058 20:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:48.058 20:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:48.058 20:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:48.058 20:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:48.058 20:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:48.058 20:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:48.316 20:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:13:48.316 20:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:13:49.305 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:49.305 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:49.305 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:49.305 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.305 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.305 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.305 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:49.305 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:49.305 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:49.305 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:13:49.305 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:49.305 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:49.305 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:49.305 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:49.305 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:49.305 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.563 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.563 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.563 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.563 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.563 20:43:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.563 20:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.821 00:13:49.821 20:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:49.821 20:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.821 20:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:50.079 20:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:50.079 20:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:50.079 20:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.079 20:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.079 20:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.079 20:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:50.079 { 00:13:50.079 "cntlid": 3, 00:13:50.079 "qid": 0, 00:13:50.079 "state": "enabled", 00:13:50.079 "thread": "nvmf_tgt_poll_group_000", 00:13:50.079 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:13:50.079 "listen_address": { 00:13:50.079 "trtype": "TCP", 00:13:50.079 "adrfam": "IPv4", 00:13:50.079 "traddr": "10.0.0.2", 00:13:50.079 "trsvcid": "4420" 00:13:50.079 }, 00:13:50.079 "peer_address": { 00:13:50.079 "trtype": "TCP", 00:13:50.079 "adrfam": "IPv4", 00:13:50.079 "traddr": "10.0.0.1", 00:13:50.079 "trsvcid": "42296" 00:13:50.079 }, 00:13:50.079 "auth": { 00:13:50.079 "state": "completed", 00:13:50.079 "digest": "sha256", 00:13:50.079 "dhgroup": "null" 00:13:50.079 } 00:13:50.079 } 00:13:50.079 ]' 00:13:50.079 20:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:50.079 20:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:50.079 20:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:50.079 20:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:50.079 20:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:50.079 20:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:50.079 20:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:50.079 20:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:50.337 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: --dhchap-ctrl-secret DHHC-1:02:YTVkZDRjYmVjNmEyNjNiNjM2YmQ5YzdmMTE5MDNmMDEzOTRjNTZlZmE2Mjk0YTU0mew6Ig==: 00:13:50.337 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: --dhchap-ctrl-secret DHHC-1:02:YTVkZDRjYmVjNmEyNjNiNjM2YmQ5YzdmMTE5MDNmMDEzOTRjNTZlZmE2Mjk0YTU0mew6Ig==: 00:13:51.270 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:51.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:51.270 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:51.270 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.271 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.271 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.271 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:51.271 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:51.271 20:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:51.528 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:13:51.528 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:51.528 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:51.528 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:51.528 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:51.528 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:51.528 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:51.528 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.528 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.528 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.528 20:43:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:51.528 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:51.528 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:52.093 00:13:52.093 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:52.093 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:52.093 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:52.351 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:52.351 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:52.351 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.351 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.351 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.351 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:52.351 { 00:13:52.351 "cntlid": 5, 00:13:52.351 "qid": 0, 00:13:52.351 "state": "enabled", 00:13:52.351 "thread": "nvmf_tgt_poll_group_000", 00:13:52.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:13:52.351 "listen_address": { 00:13:52.351 "trtype": "TCP", 00:13:52.351 "adrfam": "IPv4", 00:13:52.351 "traddr": "10.0.0.2", 00:13:52.351 "trsvcid": "4420" 00:13:52.351 }, 00:13:52.351 "peer_address": { 00:13:52.351 "trtype": "TCP", 00:13:52.351 "adrfam": "IPv4", 00:13:52.351 "traddr": "10.0.0.1", 00:13:52.351 "trsvcid": "42318" 00:13:52.351 }, 00:13:52.351 "auth": { 00:13:52.351 "state": "completed", 00:13:52.351 "digest": "sha256", 00:13:52.351 "dhgroup": "null" 00:13:52.351 } 00:13:52.351 } 00:13:52.351 ]' 00:13:52.351 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:52.351 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:52.351 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:52.351 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:52.351 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:52.351 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:52.351 20:43:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:52.351 20:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:52.610 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret DHHC-1:01:NGIwZjg2Yjk0ZmRiYjVjZTY3YjhkMmUxNjkzYjMxODOiDw2k: 00:13:52.610 20:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret DHHC-1:01:NGIwZjg2Yjk0ZmRiYjVjZTY3YjhkMmUxNjkzYjMxODOiDw2k: 00:13:53.544 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:53.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:53.544 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:53.544 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.544 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.544 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.544 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:53.544 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:53.544 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:53.801 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:13:53.801 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:53.802 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:53.802 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:53.802 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:53.802 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:53.802 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:13:53.802 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.802 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:53.802 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.802 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:53.802 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:53.802 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:54.059 00:13:54.059 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:54.059 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:54.059 20:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:54.317 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.317 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:54.317 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.317 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.574 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.574 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:54.574 { 00:13:54.574 "cntlid": 7, 00:13:54.574 "qid": 0, 00:13:54.574 "state": "enabled", 00:13:54.574 "thread": "nvmf_tgt_poll_group_000", 00:13:54.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:13:54.574 "listen_address": { 00:13:54.574 "trtype": "TCP", 00:13:54.574 "adrfam": "IPv4", 00:13:54.574 "traddr": "10.0.0.2", 00:13:54.574 "trsvcid": "4420" 00:13:54.574 }, 00:13:54.574 "peer_address": { 00:13:54.574 "trtype": "TCP", 00:13:54.574 "adrfam": "IPv4", 00:13:54.574 "traddr": "10.0.0.1", 00:13:54.574 "trsvcid": "36394" 00:13:54.574 }, 00:13:54.574 "auth": { 00:13:54.574 "state": "completed", 00:13:54.574 "digest": "sha256", 00:13:54.574 "dhgroup": "null" 00:13:54.574 } 00:13:54.574 } 00:13:54.574 ]' 00:13:54.574 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:54.574 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:54.574 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:54.574 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:54.574 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:54.574 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:54.574 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:54.574 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:54.832 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:13:54.832 20:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:13:55.765 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:55.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:55.765 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:55.765 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.765 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.765 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.765 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:55.765 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:55.765 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:55.765 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:56.024 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:13:56.024 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:56.024 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:56.024 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:56.024 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:56.024 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.024 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.024 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.024 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.024 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.024 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.024 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.024 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.281 00:13:56.539 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:56.539 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:56.539 20:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.798 20:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.798 20:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:56.798 20:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.798 20:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.798 20:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.798 20:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:56.798 { 00:13:56.798 "cntlid": 9, 00:13:56.798 "qid": 0, 00:13:56.798 "state": "enabled", 00:13:56.798 "thread": "nvmf_tgt_poll_group_000", 00:13:56.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:13:56.798 "listen_address": { 00:13:56.798 "trtype": "TCP", 00:13:56.798 "adrfam": "IPv4", 00:13:56.798 "traddr": "10.0.0.2", 00:13:56.798 "trsvcid": "4420" 00:13:56.798 }, 00:13:56.798 "peer_address": { 00:13:56.798 "trtype": "TCP", 00:13:56.798 "adrfam": "IPv4", 00:13:56.798 "traddr": "10.0.0.1", 00:13:56.798 "trsvcid": "36412" 00:13:56.798 }, 00:13:56.798 "auth": { 00:13:56.798 "state": "completed", 00:13:56.798 "digest": "sha256", 00:13:56.798 "dhgroup": "ffdhe2048" 00:13:56.798 } 00:13:56.798 } 00:13:56.798 ]' 00:13:56.798 20:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:56.798 20:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:56.798 20:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:56.798 20:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:13:56.798 20:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:56.798 20:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:56.798 20:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:56.798 20:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.054 20:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:13:57.054 20:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:13:57.999 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:57.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:57.999 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:57.999 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.999 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.999 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.999 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:57.999 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:57.999 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:58.256 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:13:58.256 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:58.256 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:58.256 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:58.256 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:58.256 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.257 20:44:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:58.257 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.257 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.257 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.257 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:58.257 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:58.257 20:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:58.820 00:13:58.820 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:58.820 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:58.820 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:58.820 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:58.820 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:58.820 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.820 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.076 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.076 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:59.076 { 00:13:59.076 "cntlid": 11, 00:13:59.076 "qid": 0, 00:13:59.076 "state": "enabled", 00:13:59.076 "thread": "nvmf_tgt_poll_group_000", 00:13:59.076 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:13:59.076 "listen_address": { 00:13:59.076 "trtype": "TCP", 00:13:59.076 "adrfam": "IPv4", 00:13:59.076 "traddr": "10.0.0.2", 00:13:59.076 "trsvcid": "4420" 00:13:59.076 }, 00:13:59.076 "peer_address": { 00:13:59.076 "trtype": "TCP", 00:13:59.076 "adrfam": "IPv4", 00:13:59.076 "traddr": "10.0.0.1", 00:13:59.076 "trsvcid": "36442" 00:13:59.076 }, 00:13:59.076 "auth": { 00:13:59.076 "state": "completed", 00:13:59.076 "digest": "sha256", 00:13:59.076 "dhgroup": "ffdhe2048" 00:13:59.076 } 00:13:59.076 } 00:13:59.076 ]' 00:13:59.076 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:59.076 20:44:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:59.076 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:59.076 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:59.076 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:59.076 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.076 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.076 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.332 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: --dhchap-ctrl-secret DHHC-1:02:YTVkZDRjYmVjNmEyNjNiNjM2YmQ5YzdmMTE5MDNmMDEzOTRjNTZlZmE2Mjk0YTU0mew6Ig==: 00:13:59.332 20:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: --dhchap-ctrl-secret DHHC-1:02:YTVkZDRjYmVjNmEyNjNiNjM2YmQ5YzdmMTE5MDNmMDEzOTRjNTZlZmE2Mjk0YTU0mew6Ig==: 00:14:00.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:00.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:00.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:00.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:00.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:00.262 20:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:00.519 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:14:00.519 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:00.519 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:00.519 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:00.519 20:44:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:00.519 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:00.519 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:00.519 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.519 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.519 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.519 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:00.519 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:00.519 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:00.776 00:14:01.034 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:01.034 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:01.034 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.291 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.291 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.291 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.291 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.291 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.291 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:01.291 { 00:14:01.291 "cntlid": 13, 00:14:01.291 "qid": 0, 00:14:01.291 "state": "enabled", 00:14:01.291 "thread": "nvmf_tgt_poll_group_000", 00:14:01.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:01.291 "listen_address": { 00:14:01.291 "trtype": "TCP", 00:14:01.291 "adrfam": "IPv4", 00:14:01.291 "traddr": "10.0.0.2", 00:14:01.291 "trsvcid": "4420" 00:14:01.291 }, 00:14:01.291 "peer_address": { 00:14:01.291 "trtype": "TCP", 00:14:01.291 "adrfam": "IPv4", 00:14:01.291 "traddr": "10.0.0.1", 00:14:01.291 "trsvcid": "36454" 00:14:01.291 }, 00:14:01.291 "auth": { 00:14:01.291 "state": "completed", 00:14:01.291 "digest": 
"sha256", 00:14:01.291 "dhgroup": "ffdhe2048" 00:14:01.291 } 00:14:01.291 } 00:14:01.291 ]' 00:14:01.291 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:01.291 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:01.291 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:01.291 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:01.291 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:01.291 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:01.291 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:01.291 20:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:01.549 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret DHHC-1:01:NGIwZjg2Yjk0ZmRiYjVjZTY3YjhkMmUxNjkzYjMxODOiDw2k: 00:14:01.549 20:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret DHHC-1:01:NGIwZjg2Yjk0ZmRiYjVjZTY3YjhkMmUxNjkzYjMxODOiDw2k: 00:14:02.480 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.480 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:02.480 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.480 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.480 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.480 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:02.480 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:02.480 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:02.739 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:14:02.739 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:02.739 20:44:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:02.739 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:02.739 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:02.739 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:02.739 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:02.739 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.739 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.739 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.739 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:02.739 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:02.739 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:02.997 00:14:03.255 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:03.255 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:03.255 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.513 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.513 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:03.513 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.513 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.513 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.513 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:03.513 { 00:14:03.513 "cntlid": 15, 00:14:03.513 "qid": 0, 00:14:03.513 "state": "enabled", 00:14:03.513 "thread": "nvmf_tgt_poll_group_000", 00:14:03.513 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:03.513 "listen_address": { 00:14:03.513 "trtype": "TCP", 00:14:03.513 "adrfam": "IPv4", 00:14:03.513 "traddr": "10.0.0.2", 00:14:03.513 "trsvcid": "4420" 00:14:03.513 }, 00:14:03.513 "peer_address": { 00:14:03.513 "trtype": "TCP", 00:14:03.513 "adrfam": "IPv4", 00:14:03.513 "traddr": "10.0.0.1", 00:14:03.513 
"trsvcid": "55632" 00:14:03.513 }, 00:14:03.513 "auth": { 00:14:03.513 "state": "completed", 00:14:03.513 "digest": "sha256", 00:14:03.513 "dhgroup": "ffdhe2048" 00:14:03.513 } 00:14:03.513 } 00:14:03.513 ]' 00:14:03.513 20:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:03.513 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:03.513 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:03.513 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:03.513 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:03.513 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:03.513 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:03.513 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:03.771 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:14:03.771 20:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:14:04.705 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.705 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:04.705 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.705 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.705 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.705 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:04.705 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:04.705 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:04.705 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:04.963 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:14:04.963 20:44:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:04.963 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:04.963 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:04.963 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:04.963 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:04.963 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.963 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.963 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.963 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.963 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.963 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.963 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:05.528 00:14:05.528 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:05.528 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:05.528 20:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.786 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.786 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:05.786 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.786 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.786 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.786 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:05.786 { 00:14:05.786 "cntlid": 17, 00:14:05.786 "qid": 0, 00:14:05.786 "state": "enabled", 00:14:05.786 "thread": "nvmf_tgt_poll_group_000", 00:14:05.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:05.786 "listen_address": { 00:14:05.786 "trtype": "TCP", 00:14:05.786 "adrfam": "IPv4", 
00:14:05.786 "traddr": "10.0.0.2", 00:14:05.786 "trsvcid": "4420" 00:14:05.786 }, 00:14:05.786 "peer_address": { 00:14:05.786 "trtype": "TCP", 00:14:05.786 "adrfam": "IPv4", 00:14:05.786 "traddr": "10.0.0.1", 00:14:05.786 "trsvcid": "55674" 00:14:05.786 }, 00:14:05.786 "auth": { 00:14:05.786 "state": "completed", 00:14:05.786 "digest": "sha256", 00:14:05.786 "dhgroup": "ffdhe3072" 00:14:05.786 } 00:14:05.786 } 00:14:05.786 ]' 00:14:05.786 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:05.786 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:05.786 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:05.786 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:05.786 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:05.786 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.786 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.786 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.044 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:14:06.044 20:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:14:06.978 20:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.978 20:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:06.978 20:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.978 20:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.978 20:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.978 20:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:06.978 20:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:06.978 20:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:07.236 20:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:14:07.236 20:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:07.236 20:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:07.236 20:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:07.236 20:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:07.236 20:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:07.236 20:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:07.236 20:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.236 20:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.236 20:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.236 20:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:07.236 20:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:07.236 20:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:07.802 00:14:07.802 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:07.802 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:07.802 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.802 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.802 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:07.803 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.803 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.803 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.803 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:07.803 { 
00:14:07.803 "cntlid": 19, 00:14:07.803 "qid": 0, 00:14:07.803 "state": "enabled", 00:14:07.803 "thread": "nvmf_tgt_poll_group_000", 00:14:07.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:07.803 "listen_address": { 00:14:07.803 "trtype": "TCP", 00:14:07.803 "adrfam": "IPv4", 00:14:07.803 "traddr": "10.0.0.2", 00:14:07.803 "trsvcid": "4420" 00:14:07.803 }, 00:14:07.803 "peer_address": { 00:14:07.803 "trtype": "TCP", 00:14:07.803 "adrfam": "IPv4", 00:14:07.803 "traddr": "10.0.0.1", 00:14:07.803 "trsvcid": "55704" 00:14:07.803 }, 00:14:07.803 "auth": { 00:14:07.803 "state": "completed", 00:14:07.803 "digest": "sha256", 00:14:07.803 "dhgroup": "ffdhe3072" 00:14:07.803 } 00:14:07.803 } 00:14:07.803 ]' 00:14:07.803 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:08.060 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:08.060 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:08.060 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:08.060 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:08.060 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:08.060 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:08.060 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.319 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: --dhchap-ctrl-secret DHHC-1:02:YTVkZDRjYmVjNmEyNjNiNjM2YmQ5YzdmMTE5MDNmMDEzOTRjNTZlZmE2Mjk0YTU0mew6Ig==: 00:14:08.319 20:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: --dhchap-ctrl-secret DHHC-1:02:YTVkZDRjYmVjNmEyNjNiNjM2YmQ5YzdmMTE5MDNmMDEzOTRjNTZlZmE2Mjk0YTU0mew6Ig==: 00:14:09.332 20:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:09.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:09.332 20:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:09.332 20:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.332 20:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.332 20:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.332 20:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:09.332 20:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:09.332 20:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:09.591 20:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:14:09.591 20:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:09.591 20:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:09.591 20:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:09.591 20:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:09.591 20:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:09.591 20:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.591 20:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.591 20:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.591 20:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.591 20:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.591 20:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.591 20:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.849 00:14:09.849 20:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:09.849 20:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:09.849 20:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.108 20:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:10.108 20:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:10.108 20:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.108 20:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.108 20:44:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.108 20:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:10.108 { 00:14:10.108 "cntlid": 21, 00:14:10.108 "qid": 0, 00:14:10.108 "state": "enabled", 00:14:10.108 "thread": "nvmf_tgt_poll_group_000", 00:14:10.108 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:10.108 "listen_address": { 00:14:10.108 "trtype": "TCP", 00:14:10.108 "adrfam": "IPv4", 00:14:10.108 "traddr": "10.0.0.2", 00:14:10.108 "trsvcid": "4420" 00:14:10.108 }, 00:14:10.108 "peer_address": { 00:14:10.108 "trtype": "TCP", 00:14:10.108 "adrfam": "IPv4", 00:14:10.108 "traddr": "10.0.0.1", 00:14:10.108 "trsvcid": "55720" 00:14:10.108 }, 00:14:10.108 "auth": { 00:14:10.108 "state": "completed", 00:14:10.108 "digest": "sha256", 00:14:10.108 "dhgroup": "ffdhe3072" 00:14:10.108 } 00:14:10.108 } 00:14:10.108 ]' 00:14:10.108 20:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:10.108 20:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:10.108 20:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:10.108 20:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:10.108 20:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:10.366 20:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:10.366 20:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:10.366 20:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:10.623 20:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret DHHC-1:01:NGIwZjg2Yjk0ZmRiYjVjZTY3YjhkMmUxNjkzYjMxODOiDw2k: 00:14:10.623 20:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret DHHC-1:01:NGIwZjg2Yjk0ZmRiYjVjZTY3YjhkMmUxNjkzYjMxODOiDw2k: 00:14:11.557 20:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:11.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:11.557 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:11.557 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.557 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.557 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:11.557 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:11.557 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:11.557 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:11.815 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:14:11.815 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:11.815 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:11.815 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:11.815 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:11.815 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:11.815 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:11.815 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.815 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.815 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.815 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:11.815 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:11.815 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:12.073 00:14:12.073 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:12.073 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:12.073 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:12.331 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:12.331 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:12.331 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.331 20:44:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.331 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.331 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:12.331 { 00:14:12.331 "cntlid": 23, 00:14:12.331 "qid": 0, 00:14:12.331 "state": "enabled", 00:14:12.331 "thread": "nvmf_tgt_poll_group_000", 00:14:12.331 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:12.331 "listen_address": { 00:14:12.331 "trtype": "TCP", 00:14:12.331 "adrfam": "IPv4", 00:14:12.331 "traddr": "10.0.0.2", 00:14:12.331 "trsvcid": "4420" 00:14:12.331 }, 00:14:12.331 "peer_address": { 00:14:12.331 "trtype": "TCP", 00:14:12.331 "adrfam": "IPv4", 00:14:12.331 "traddr": "10.0.0.1", 00:14:12.331 "trsvcid": "55746" 00:14:12.331 }, 00:14:12.331 "auth": { 00:14:12.331 "state": "completed", 00:14:12.331 "digest": "sha256", 00:14:12.331 "dhgroup": "ffdhe3072" 00:14:12.331 } 00:14:12.331 } 00:14:12.331 ]' 00:14:12.331 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:12.331 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:12.331 20:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:12.331 20:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:12.331 20:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:12.589 20:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:12.589 20:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:12.589 20:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:12.848 20:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:14:12.848 20:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:14:13.781 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:13.781 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:13.781 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:13.781 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.781 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.781 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:13.781 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:13.781 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:13.781 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:13.781 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:14.039 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:14:14.039 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:14.039 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:14.039 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:14.039 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:14.039 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.039 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:14.039 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.039 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.039 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.039 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:14.039 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:14.039 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:14.297 00:14:14.297 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:14.297 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:14.298 20:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.556 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.556 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.556 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.556 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.556 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.556 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:14.556 { 00:14:14.556 "cntlid": 25, 00:14:14.556 "qid": 0, 00:14:14.556 "state": "enabled", 00:14:14.556 "thread": "nvmf_tgt_poll_group_000", 00:14:14.556 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:14.556 "listen_address": { 00:14:14.556 "trtype": "TCP", 00:14:14.556 "adrfam": "IPv4", 00:14:14.556 "traddr": "10.0.0.2", 00:14:14.556 "trsvcid": "4420" 00:14:14.556 }, 00:14:14.556 "peer_address": { 00:14:14.556 "trtype": "TCP", 00:14:14.556 "adrfam": "IPv4", 00:14:14.556 "traddr": "10.0.0.1", 00:14:14.556 "trsvcid": "58964" 00:14:14.556 }, 00:14:14.556 "auth": { 00:14:14.556 "state": "completed", 00:14:14.556 "digest": "sha256", 00:14:14.556 "dhgroup": "ffdhe4096" 00:14:14.556 } 00:14:14.556 } 00:14:14.556 ]' 00:14:14.556 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:14.556 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:14.556 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:14.814 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:14.814 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:14.814 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:14.814 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:14.814 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.072 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:14:15.072 20:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:14:16.071 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.071 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:16.071 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.071 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.071 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.071 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:16.071 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:16.071 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:16.357 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:14:16.357 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:16.357 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:16.357 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:16.357 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:16.357 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.357 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.357 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.357 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.357 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.357 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.357 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.357 20:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.614 00:14:16.614 20:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:16.614 20:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:16.614 20:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.872 20:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.872 20:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.872 20:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.872 20:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.872 20:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.872 20:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:16.872 { 00:14:16.872 "cntlid": 27, 00:14:16.872 "qid": 0, 00:14:16.872 "state": "enabled", 00:14:16.872 "thread": "nvmf_tgt_poll_group_000", 00:14:16.872 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:16.872 "listen_address": { 00:14:16.872 "trtype": "TCP", 00:14:16.872 "adrfam": "IPv4", 00:14:16.872 "traddr": "10.0.0.2", 00:14:16.872 "trsvcid": "4420" 00:14:16.872 }, 00:14:16.872 "peer_address": { 00:14:16.873 "trtype": "TCP", 00:14:16.873 "adrfam": "IPv4", 00:14:16.873 "traddr": "10.0.0.1", 00:14:16.873 "trsvcid": "59000" 00:14:16.873 }, 00:14:16.873 "auth": { 00:14:16.873 "state": "completed", 00:14:16.873 "digest": "sha256", 00:14:16.873 "dhgroup": "ffdhe4096" 00:14:16.873 } 00:14:16.873 } 00:14:16.873 ]' 00:14:16.873 20:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:16.873 20:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:16.873 20:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:16.873 20:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:16.873 20:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:17.131 20:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.131 20:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.131 20:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.389 20:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: --dhchap-ctrl-secret DHHC-1:02:YTVkZDRjYmVjNmEyNjNiNjM2YmQ5YzdmMTE5MDNmMDEzOTRjNTZlZmE2Mjk0YTU0mew6Ig==: 00:14:17.389 20:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: --dhchap-ctrl-secret DHHC-1:02:YTVkZDRjYmVjNmEyNjNiNjM2YmQ5YzdmMTE5MDNmMDEzOTRjNTZlZmE2Mjk0YTU0mew6Ig==: 00:14:18.318 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:14:18.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.318 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:18.318 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.318 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.318 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.318 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:18.318 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:18.318 20:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:18.576 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:14:18.576 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:18.576 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:18.576 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:18.576 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:18.576 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.576 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:18.576 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.576 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.576 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.576 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:18.576 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:18.576 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:18.834 00:14:18.834 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
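For readability, the pass that this trace repeats for every dhgroup/key combination can be summarized in the shell sketch below. It is an editorial paraphrase of the commands visible in the trace (the connect_authenticate flow driven by target/auth.sh), not the script itself: the subsystem NQN, host NQN/UUID, addresses, socket path and the keyN/ckeyN naming are taken from the log, while the variables ($dhgroup, $keyid, $key, $ckey, $hostnqn, $hostid) are illustrative placeholders, rpc.py abbreviates the full scripts/rpc.py path, rpc_cmd stands for the target-side RPC wrapper the test uses, the controller-key options appear only when a ckey is configured for that key index, and the three separate jq checks from the trace are condensed into one call here.

  # One authentication pass (digest sha256; dhgroup cycling here through ffdhe3072, ffdhe4096, ffdhe6144; keyid 0..3)
  # Host-side initiator options: allowed DH-HMAC-CHAP digest and DH group.
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
  # Target side: grant the host access with this key pair.
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  # Attach through the host bdev layer, then verify the qpair authenticated.
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'      # expect nvme0
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
      | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'             # sha256 / $dhgroup / completed
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # Repeat the handshake with nvme-cli using the DHHC-1 secrets, then clean up.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" \
      --hostid "$hostid" -l 0 --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"

Each pass that authenticates successfully shows up in the trace as a qpair in state "enabled" whose auth block reports the negotiated digest and DH group with state "completed".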
00:14:18.834 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.834 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:19.091 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.091 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.091 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.091 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.091 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.091 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:19.091 { 00:14:19.091 "cntlid": 29, 00:14:19.091 "qid": 0, 00:14:19.091 "state": "enabled", 00:14:19.091 "thread": "nvmf_tgt_poll_group_000", 00:14:19.091 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:19.091 "listen_address": { 00:14:19.091 "trtype": "TCP", 00:14:19.091 "adrfam": "IPv4", 00:14:19.091 "traddr": "10.0.0.2", 00:14:19.091 "trsvcid": "4420" 00:14:19.091 }, 00:14:19.091 "peer_address": { 00:14:19.091 "trtype": "TCP", 00:14:19.091 "adrfam": "IPv4", 00:14:19.091 "traddr": "10.0.0.1", 00:14:19.091 "trsvcid": "59018" 00:14:19.091 }, 00:14:19.091 "auth": { 00:14:19.091 "state": "completed", 00:14:19.091 "digest": "sha256", 00:14:19.091 "dhgroup": "ffdhe4096" 00:14:19.091 } 00:14:19.091 } 00:14:19.091 ]' 00:14:19.091 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:19.091 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:19.091 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:19.092 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:19.092 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:19.350 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.350 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.350 20:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.607 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret DHHC-1:01:NGIwZjg2Yjk0ZmRiYjVjZTY3YjhkMmUxNjkzYjMxODOiDw2k: 00:14:19.607 20:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: 
--dhchap-ctrl-secret DHHC-1:01:NGIwZjg2Yjk0ZmRiYjVjZTY3YjhkMmUxNjkzYjMxODOiDw2k: 00:14:20.540 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.540 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:20.540 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.540 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.540 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.540 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:20.540 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:20.540 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:20.797 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:14:20.797 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:20.797 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:20.797 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:20.797 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:20.797 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.797 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:20.797 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.797 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.797 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.797 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:20.797 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:20.797 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:21.054 00:14:21.054 20:44:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:21.054 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:21.054 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.312 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.312 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.312 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.312 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.312 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.312 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:21.312 { 00:14:21.312 "cntlid": 31, 00:14:21.312 "qid": 0, 00:14:21.312 "state": "enabled", 00:14:21.312 "thread": "nvmf_tgt_poll_group_000", 00:14:21.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:21.312 "listen_address": { 00:14:21.312 "trtype": "TCP", 00:14:21.312 "adrfam": "IPv4", 00:14:21.312 "traddr": "10.0.0.2", 00:14:21.312 "trsvcid": "4420" 00:14:21.312 }, 00:14:21.312 "peer_address": { 00:14:21.312 "trtype": "TCP", 00:14:21.312 "adrfam": "IPv4", 00:14:21.312 "traddr": "10.0.0.1", 00:14:21.312 "trsvcid": "59042" 00:14:21.312 }, 00:14:21.312 "auth": { 00:14:21.312 "state": "completed", 00:14:21.312 "digest": "sha256", 00:14:21.312 "dhgroup": "ffdhe4096" 00:14:21.312 } 00:14:21.312 } 00:14:21.312 ]' 00:14:21.312 20:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:21.569 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:21.569 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:21.569 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:21.569 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:21.569 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:21.569 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:21.569 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.825 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:14:21.825 20:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret 
DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:14:22.757 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.757 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:22.757 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.757 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.757 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.757 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:22.757 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:22.757 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:22.757 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:23.015 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:14:23.015 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:23.015 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:23.015 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:23.015 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:23.015 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.015 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.015 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.015 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.015 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.015 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.015 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.015 20:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.578 00:14:23.579 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:23.579 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:23.579 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.836 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.836 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.836 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.836 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.836 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.836 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:23.836 { 00:14:23.836 "cntlid": 33, 00:14:23.836 "qid": 0, 00:14:23.836 "state": "enabled", 00:14:23.836 "thread": "nvmf_tgt_poll_group_000", 00:14:23.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:23.836 "listen_address": { 00:14:23.836 "trtype": "TCP", 00:14:23.836 "adrfam": "IPv4", 00:14:23.836 "traddr": "10.0.0.2", 00:14:23.836 "trsvcid": "4420" 00:14:23.836 }, 00:14:23.836 "peer_address": { 00:14:23.836 "trtype": "TCP", 00:14:23.836 "adrfam": "IPv4", 00:14:23.836 "traddr": "10.0.0.1", 00:14:23.836 "trsvcid": "38608" 00:14:23.836 }, 00:14:23.836 "auth": { 00:14:23.836 "state": "completed", 00:14:23.836 "digest": "sha256", 00:14:23.836 "dhgroup": "ffdhe6144" 00:14:23.836 } 00:14:23.836 } 00:14:23.836 ]' 00:14:23.836 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:23.837 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:23.837 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:23.837 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:23.837 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:23.837 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.837 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.837 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.093 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret 
DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:14:24.093 20:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:14:25.052 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.052 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:25.052 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.052 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.052 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.052 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:25.052 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:25.052 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:25.309 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:14:25.309 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:25.309 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:25.309 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:25.309 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:25.309 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.309 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.309 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.309 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.309 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.309 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.310 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.310 20:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.873 00:14:25.873 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:25.874 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:25.874 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.131 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.131 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.131 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.131 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.131 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.131 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:26.131 { 00:14:26.131 "cntlid": 35, 00:14:26.131 "qid": 0, 00:14:26.131 "state": "enabled", 00:14:26.131 "thread": "nvmf_tgt_poll_group_000", 00:14:26.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:26.131 "listen_address": { 00:14:26.131 "trtype": "TCP", 00:14:26.131 "adrfam": "IPv4", 00:14:26.131 "traddr": "10.0.0.2", 00:14:26.131 "trsvcid": "4420" 00:14:26.131 }, 00:14:26.131 "peer_address": { 00:14:26.131 "trtype": "TCP", 00:14:26.131 "adrfam": "IPv4", 00:14:26.131 "traddr": "10.0.0.1", 00:14:26.131 "trsvcid": "38632" 00:14:26.131 }, 00:14:26.131 "auth": { 00:14:26.131 "state": "completed", 00:14:26.131 "digest": "sha256", 00:14:26.131 "dhgroup": "ffdhe6144" 00:14:26.131 } 00:14:26.131 } 00:14:26.131 ]' 00:14:26.131 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:26.131 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:26.131 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:26.389 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:26.389 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:26.390 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.390 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.390 20:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.647 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: --dhchap-ctrl-secret DHHC-1:02:YTVkZDRjYmVjNmEyNjNiNjM2YmQ5YzdmMTE5MDNmMDEzOTRjNTZlZmE2Mjk0YTU0mew6Ig==: 00:14:26.647 20:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: --dhchap-ctrl-secret DHHC-1:02:YTVkZDRjYmVjNmEyNjNiNjM2YmQ5YzdmMTE5MDNmMDEzOTRjNTZlZmE2Mjk0YTU0mew6Ig==: 00:14:27.581 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.581 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:27.581 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.581 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.581 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.581 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:27.581 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:27.581 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:27.839 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:14:27.839 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:27.839 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:27.839 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:27.839 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:27.839 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.839 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.839 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.839 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.839 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.839 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.839 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.839 20:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:28.404 00:14:28.404 20:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:28.404 20:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:28.404 20:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.662 20:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.662 20:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.662 20:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.662 20:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.920 20:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.920 20:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:28.920 { 00:14:28.920 "cntlid": 37, 00:14:28.920 "qid": 0, 00:14:28.920 "state": "enabled", 00:14:28.920 "thread": "nvmf_tgt_poll_group_000", 00:14:28.920 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:28.920 "listen_address": { 00:14:28.920 "trtype": "TCP", 00:14:28.920 "adrfam": "IPv4", 00:14:28.920 "traddr": "10.0.0.2", 00:14:28.920 "trsvcid": "4420" 00:14:28.920 }, 00:14:28.920 "peer_address": { 00:14:28.920 "trtype": "TCP", 00:14:28.920 "adrfam": "IPv4", 00:14:28.920 "traddr": "10.0.0.1", 00:14:28.920 "trsvcid": "38660" 00:14:28.920 }, 00:14:28.920 "auth": { 00:14:28.920 "state": "completed", 00:14:28.920 "digest": "sha256", 00:14:28.920 "dhgroup": "ffdhe6144" 00:14:28.920 } 00:14:28.920 } 00:14:28.920 ]' 00:14:28.920 20:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:28.920 20:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:28.920 20:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:28.920 20:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:28.920 20:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:28.920 20:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.920 20:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:14:28.920 20:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.177 20:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret DHHC-1:01:NGIwZjg2Yjk0ZmRiYjVjZTY3YjhkMmUxNjkzYjMxODOiDw2k: 00:14:29.177 20:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret DHHC-1:01:NGIwZjg2Yjk0ZmRiYjVjZTY3YjhkMmUxNjkzYjMxODOiDw2k: 00:14:30.108 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.108 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:30.108 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.108 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.108 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.108 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:30.108 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:30.108 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:30.364 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:14:30.364 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:30.364 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:30.364 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:30.364 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:30.364 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.364 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:30.364 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.364 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.364 20:44:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.364 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:30.364 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:30.364 20:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:30.972 00:14:30.972 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:30.973 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:30.973 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.230 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.230 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.230 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.230 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.230 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.230 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:31.230 { 00:14:31.230 "cntlid": 39, 00:14:31.230 "qid": 0, 00:14:31.230 "state": "enabled", 00:14:31.230 "thread": "nvmf_tgt_poll_group_000", 00:14:31.230 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:31.230 "listen_address": { 00:14:31.230 "trtype": "TCP", 00:14:31.230 "adrfam": "IPv4", 00:14:31.230 "traddr": "10.0.0.2", 00:14:31.230 "trsvcid": "4420" 00:14:31.230 }, 00:14:31.230 "peer_address": { 00:14:31.230 "trtype": "TCP", 00:14:31.230 "adrfam": "IPv4", 00:14:31.230 "traddr": "10.0.0.1", 00:14:31.230 "trsvcid": "38678" 00:14:31.230 }, 00:14:31.230 "auth": { 00:14:31.230 "state": "completed", 00:14:31.230 "digest": "sha256", 00:14:31.230 "dhgroup": "ffdhe6144" 00:14:31.230 } 00:14:31.230 } 00:14:31.230 ]' 00:14:31.230 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:31.230 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:31.230 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:31.230 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:31.230 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:31.230 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:14:31.230 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.230 20:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.486 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:14:31.486 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:14:32.416 20:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:32.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:32.416 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:32.416 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.416 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.416 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.416 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:32.416 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:32.416 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:32.416 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:32.673 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:14:32.673 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:32.673 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:32.673 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:32.673 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:32.673 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.673 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.673 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:32.673 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.673 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.673 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.673 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.673 20:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:33.607 00:14:33.607 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:33.607 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:33.607 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.864 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.864 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.864 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.864 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.864 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.864 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:33.864 { 00:14:33.864 "cntlid": 41, 00:14:33.865 "qid": 0, 00:14:33.865 "state": "enabled", 00:14:33.865 "thread": "nvmf_tgt_poll_group_000", 00:14:33.865 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:33.865 "listen_address": { 00:14:33.865 "trtype": "TCP", 00:14:33.865 "adrfam": "IPv4", 00:14:33.865 "traddr": "10.0.0.2", 00:14:33.865 "trsvcid": "4420" 00:14:33.865 }, 00:14:33.865 "peer_address": { 00:14:33.865 "trtype": "TCP", 00:14:33.865 "adrfam": "IPv4", 00:14:33.865 "traddr": "10.0.0.1", 00:14:33.865 "trsvcid": "56702" 00:14:33.865 }, 00:14:33.865 "auth": { 00:14:33.865 "state": "completed", 00:14:33.865 "digest": "sha256", 00:14:33.865 "dhgroup": "ffdhe8192" 00:14:33.865 } 00:14:33.865 } 00:14:33.865 ]' 00:14:33.865 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:33.865 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:33.865 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:34.122 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:34.122 20:44:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:34.122 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.122 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.122 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.380 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:14:34.380 20:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:14:35.313 20:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.313 20:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:35.313 20:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.313 20:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.313 20:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.313 20:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:35.313 20:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:35.313 20:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:35.571 20:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:14:35.571 20:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:35.571 20:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:35.571 20:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:35.571 20:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:35.571 20:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.571 20:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:35.571 20:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.571 20:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.571 20:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.571 20:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:35.571 20:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:35.571 20:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.504 00:14:36.504 20:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:36.504 20:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:36.504 20:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.762 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.762 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.762 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.762 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.762 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.762 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:36.762 { 00:14:36.762 "cntlid": 43, 00:14:36.762 "qid": 0, 00:14:36.762 "state": "enabled", 00:14:36.762 "thread": "nvmf_tgt_poll_group_000", 00:14:36.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:36.762 "listen_address": { 00:14:36.762 "trtype": "TCP", 00:14:36.762 "adrfam": "IPv4", 00:14:36.762 "traddr": "10.0.0.2", 00:14:36.762 "trsvcid": "4420" 00:14:36.762 }, 00:14:36.762 "peer_address": { 00:14:36.762 "trtype": "TCP", 00:14:36.762 "adrfam": "IPv4", 00:14:36.762 "traddr": "10.0.0.1", 00:14:36.762 "trsvcid": "56732" 00:14:36.762 }, 00:14:36.762 "auth": { 00:14:36.762 "state": "completed", 00:14:36.762 "digest": "sha256", 00:14:36.762 "dhgroup": "ffdhe8192" 00:14:36.762 } 00:14:36.762 } 00:14:36.762 ]' 00:14:36.762 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:36.762 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:14:36.762 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:36.762 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:36.762 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:36.762 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.762 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.762 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.021 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: --dhchap-ctrl-secret DHHC-1:02:YTVkZDRjYmVjNmEyNjNiNjM2YmQ5YzdmMTE5MDNmMDEzOTRjNTZlZmE2Mjk0YTU0mew6Ig==: 00:14:37.021 20:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: --dhchap-ctrl-secret DHHC-1:02:YTVkZDRjYmVjNmEyNjNiNjM2YmQ5YzdmMTE5MDNmMDEzOTRjNTZlZmE2Mjk0YTU0mew6Ig==: 00:14:37.953 20:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.953 20:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:37.953 20:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.953 20:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.953 20:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.953 20:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:37.953 20:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:37.953 20:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:38.210 20:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:14:38.210 20:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:38.210 20:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:38.210 20:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:38.210 20:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:38.210 20:44:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.210 20:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.210 20:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.210 20:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.469 20:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.469 20:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.469 20:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.469 20:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.402 00:14:39.402 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:39.402 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:39.403 20:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.403 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.403 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.403 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.403 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.403 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.403 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:39.403 { 00:14:39.403 "cntlid": 45, 00:14:39.403 "qid": 0, 00:14:39.403 "state": "enabled", 00:14:39.403 "thread": "nvmf_tgt_poll_group_000", 00:14:39.403 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:39.403 "listen_address": { 00:14:39.403 "trtype": "TCP", 00:14:39.403 "adrfam": "IPv4", 00:14:39.403 "traddr": "10.0.0.2", 00:14:39.403 "trsvcid": "4420" 00:14:39.403 }, 00:14:39.403 "peer_address": { 00:14:39.403 "trtype": "TCP", 00:14:39.403 "adrfam": "IPv4", 00:14:39.403 "traddr": "10.0.0.1", 00:14:39.403 "trsvcid": "56768" 00:14:39.403 }, 00:14:39.403 "auth": { 00:14:39.403 "state": "completed", 00:14:39.403 "digest": "sha256", 00:14:39.403 "dhgroup": "ffdhe8192" 00:14:39.403 } 00:14:39.403 } 00:14:39.403 ]' 00:14:39.403 
20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:39.661 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:39.661 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:39.661 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:39.661 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:39.661 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.661 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.661 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.919 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret DHHC-1:01:NGIwZjg2Yjk0ZmRiYjVjZTY3YjhkMmUxNjkzYjMxODOiDw2k: 00:14:39.919 20:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret DHHC-1:01:NGIwZjg2Yjk0ZmRiYjVjZTY3YjhkMmUxNjkzYjMxODOiDw2k: 00:14:40.852 20:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.852 20:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:40.852 20:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.852 20:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.852 20:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.852 20:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:40.852 20:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:40.852 20:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:41.110 20:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:14:41.110 20:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:41.110 20:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:41.110 20:44:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:41.110 20:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:41.110 20:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.110 20:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:41.110 20:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.110 20:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.110 20:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.110 20:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:41.110 20:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:41.110 20:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:42.043 00:14:42.043 20:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:42.043 20:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:42.043 20:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.301 20:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.301 20:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.301 20:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.301 20:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.301 20:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.301 20:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:42.301 { 00:14:42.301 "cntlid": 47, 00:14:42.301 "qid": 0, 00:14:42.301 "state": "enabled", 00:14:42.301 "thread": "nvmf_tgt_poll_group_000", 00:14:42.301 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:42.301 "listen_address": { 00:14:42.301 "trtype": "TCP", 00:14:42.301 "adrfam": "IPv4", 00:14:42.301 "traddr": "10.0.0.2", 00:14:42.301 "trsvcid": "4420" 00:14:42.301 }, 00:14:42.301 "peer_address": { 00:14:42.301 "trtype": "TCP", 00:14:42.301 "adrfam": "IPv4", 00:14:42.301 "traddr": "10.0.0.1", 00:14:42.301 "trsvcid": "56796" 00:14:42.301 }, 00:14:42.301 "auth": { 00:14:42.301 "state": "completed", 00:14:42.301 
"digest": "sha256", 00:14:42.301 "dhgroup": "ffdhe8192" 00:14:42.301 } 00:14:42.301 } 00:14:42.301 ]' 00:14:42.301 20:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:42.301 20:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:42.301 20:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:42.301 20:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:42.301 20:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:42.301 20:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.301 20:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.301 20:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.559 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:14:42.559 20:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:14:43.492 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.492 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:43.492 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.492 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.492 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.492 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:43.492 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:43.492 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:43.493 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:43.493 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:43.750 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:14:43.750 20:44:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:43.750 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:43.750 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:43.750 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:43.750 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.750 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.750 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.751 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.751 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.751 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.751 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.751 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:44.008 00:14:44.008 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:44.008 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.008 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:44.266 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.266 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.266 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.266 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.267 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.267 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:44.267 { 00:14:44.267 "cntlid": 49, 00:14:44.267 "qid": 0, 00:14:44.267 "state": "enabled", 00:14:44.267 "thread": "nvmf_tgt_poll_group_000", 00:14:44.267 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:44.267 "listen_address": { 00:14:44.267 "trtype": "TCP", 00:14:44.267 "adrfam": "IPv4", 
00:14:44.267 "traddr": "10.0.0.2", 00:14:44.267 "trsvcid": "4420" 00:14:44.267 }, 00:14:44.267 "peer_address": { 00:14:44.267 "trtype": "TCP", 00:14:44.267 "adrfam": "IPv4", 00:14:44.267 "traddr": "10.0.0.1", 00:14:44.267 "trsvcid": "60766" 00:14:44.267 }, 00:14:44.267 "auth": { 00:14:44.267 "state": "completed", 00:14:44.267 "digest": "sha384", 00:14:44.267 "dhgroup": "null" 00:14:44.267 } 00:14:44.267 } 00:14:44.267 ]' 00:14:44.267 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:44.524 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:44.524 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:44.524 20:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:44.524 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:44.524 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.524 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.524 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.782 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:14:44.782 20:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:14:45.714 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.714 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:45.714 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.714 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.714 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.714 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:45.714 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:45.714 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:45.971 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:14:45.971 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:45.971 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:45.971 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:45.971 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:45.971 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.971 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.971 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.971 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.971 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.971 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.971 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:45.971 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.229 00:14:46.229 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:46.229 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:46.229 20:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.797 20:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.797 20:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.797 20:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.797 20:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.797 20:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.797 20:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:46.797 { 00:14:46.797 "cntlid": 51, 00:14:46.797 "qid": 0, 00:14:46.797 "state": "enabled", 
00:14:46.797 "thread": "nvmf_tgt_poll_group_000", 00:14:46.797 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:46.797 "listen_address": { 00:14:46.797 "trtype": "TCP", 00:14:46.797 "adrfam": "IPv4", 00:14:46.797 "traddr": "10.0.0.2", 00:14:46.797 "trsvcid": "4420" 00:14:46.797 }, 00:14:46.797 "peer_address": { 00:14:46.797 "trtype": "TCP", 00:14:46.797 "adrfam": "IPv4", 00:14:46.797 "traddr": "10.0.0.1", 00:14:46.797 "trsvcid": "60784" 00:14:46.797 }, 00:14:46.797 "auth": { 00:14:46.798 "state": "completed", 00:14:46.798 "digest": "sha384", 00:14:46.798 "dhgroup": "null" 00:14:46.798 } 00:14:46.798 } 00:14:46.798 ]' 00:14:46.798 20:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:46.798 20:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:46.798 20:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:46.798 20:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:46.798 20:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:46.798 20:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.798 20:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.798 20:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.056 20:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: --dhchap-ctrl-secret DHHC-1:02:YTVkZDRjYmVjNmEyNjNiNjM2YmQ5YzdmMTE5MDNmMDEzOTRjNTZlZmE2Mjk0YTU0mew6Ig==: 00:14:47.056 20:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: --dhchap-ctrl-secret DHHC-1:02:YTVkZDRjYmVjNmEyNjNiNjM2YmQ5YzdmMTE5MDNmMDEzOTRjNTZlZmE2Mjk0YTU0mew6Ig==: 00:14:47.989 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.989 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:47.989 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.989 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.989 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.989 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:47.989 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:14:47.989 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:48.247 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:14:48.247 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:48.247 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:48.247 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:48.247 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:48.247 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.247 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.247 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.247 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.247 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.247 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.247 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.247 20:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:48.505 00:14:48.505 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:48.505 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.505 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:48.763 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.763 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.763 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.763 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.763 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.763 20:44:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:48.763 { 00:14:48.763 "cntlid": 53, 00:14:48.763 "qid": 0, 00:14:48.763 "state": "enabled", 00:14:48.763 "thread": "nvmf_tgt_poll_group_000", 00:14:48.763 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:48.763 "listen_address": { 00:14:48.763 "trtype": "TCP", 00:14:48.763 "adrfam": "IPv4", 00:14:48.763 "traddr": "10.0.0.2", 00:14:48.763 "trsvcid": "4420" 00:14:48.763 }, 00:14:48.763 "peer_address": { 00:14:48.763 "trtype": "TCP", 00:14:48.763 "adrfam": "IPv4", 00:14:48.763 "traddr": "10.0.0.1", 00:14:48.763 "trsvcid": "60804" 00:14:48.763 }, 00:14:48.763 "auth": { 00:14:48.763 "state": "completed", 00:14:48.763 "digest": "sha384", 00:14:48.763 "dhgroup": "null" 00:14:48.763 } 00:14:48.763 } 00:14:48.763 ]' 00:14:48.763 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:48.763 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:48.763 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:48.763 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:48.763 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:49.021 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.021 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.021 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.279 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret DHHC-1:01:NGIwZjg2Yjk0ZmRiYjVjZTY3YjhkMmUxNjkzYjMxODOiDw2k: 00:14:49.279 20:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret DHHC-1:01:NGIwZjg2Yjk0ZmRiYjVjZTY3YjhkMmUxNjkzYjMxODOiDw2k: 00:14:50.212 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:50.212 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:50.212 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.212 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.212 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.212 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:14:50.212 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:50.212 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:50.470 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:14:50.470 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:50.470 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:50.470 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:50.470 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:50.470 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.470 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:50.470 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.470 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.470 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.470 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:50.470 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:50.470 20:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:50.728 00:14:50.728 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:50.728 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:50.728 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.985 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.985 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.985 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.985 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.985 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.985 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:50.985 { 00:14:50.985 "cntlid": 55, 00:14:50.985 "qid": 0, 00:14:50.985 "state": "enabled", 00:14:50.985 "thread": "nvmf_tgt_poll_group_000", 00:14:50.985 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:50.985 "listen_address": { 00:14:50.985 "trtype": "TCP", 00:14:50.985 "adrfam": "IPv4", 00:14:50.985 "traddr": "10.0.0.2", 00:14:50.985 "trsvcid": "4420" 00:14:50.985 }, 00:14:50.985 "peer_address": { 00:14:50.985 "trtype": "TCP", 00:14:50.985 "adrfam": "IPv4", 00:14:50.985 "traddr": "10.0.0.1", 00:14:50.985 "trsvcid": "60826" 00:14:50.985 }, 00:14:50.985 "auth": { 00:14:50.985 "state": "completed", 00:14:50.985 "digest": "sha384", 00:14:50.985 "dhgroup": "null" 00:14:50.985 } 00:14:50.985 } 00:14:50.985 ]' 00:14:50.985 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:50.985 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:50.985 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:50.985 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:50.985 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:50.985 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.985 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.985 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.551 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:14:51.551 20:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:14:52.117 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.375 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:52.375 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.375 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.375 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.375 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:52.375 20:44:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:52.375 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:52.375 20:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:52.633 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:14:52.633 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:52.633 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:52.633 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:52.633 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:52.633 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.633 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.633 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.633 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.633 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.633 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.633 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.633 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.890 00:14:52.890 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:52.890 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:52.890 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.147 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.147 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.147 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:53.147 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.147 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.147 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:53.147 { 00:14:53.147 "cntlid": 57, 00:14:53.147 "qid": 0, 00:14:53.147 "state": "enabled", 00:14:53.147 "thread": "nvmf_tgt_poll_group_000", 00:14:53.147 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:53.147 "listen_address": { 00:14:53.147 "trtype": "TCP", 00:14:53.147 "adrfam": "IPv4", 00:14:53.147 "traddr": "10.0.0.2", 00:14:53.147 "trsvcid": "4420" 00:14:53.147 }, 00:14:53.147 "peer_address": { 00:14:53.147 "trtype": "TCP", 00:14:53.147 "adrfam": "IPv4", 00:14:53.147 "traddr": "10.0.0.1", 00:14:53.147 "trsvcid": "48252" 00:14:53.147 }, 00:14:53.147 "auth": { 00:14:53.147 "state": "completed", 00:14:53.147 "digest": "sha384", 00:14:53.147 "dhgroup": "ffdhe2048" 00:14:53.147 } 00:14:53.147 } 00:14:53.147 ]' 00:14:53.147 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:53.148 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:53.148 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:53.148 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:53.148 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:53.404 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.404 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.404 20:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.662 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:14:53.662 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:14:54.594 20:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.594 20:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:54.594 20:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.594 20:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.594 20:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.594 20:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:54.594 20:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:54.594 20:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:54.862 20:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:14:54.862 20:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:54.862 20:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:54.862 20:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:54.862 20:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:54.862 20:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.862 20:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.862 20:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.862 20:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.862 20:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.862 20:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.862 20:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.862 20:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.141 00:14:55.141 20:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:55.141 20:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:55.141 20:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.398 20:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.398 20:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.398 20:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.398 20:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.398 20:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.398 20:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:55.398 { 00:14:55.398 "cntlid": 59, 00:14:55.398 "qid": 0, 00:14:55.398 "state": "enabled", 00:14:55.398 "thread": "nvmf_tgt_poll_group_000", 00:14:55.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:55.398 "listen_address": { 00:14:55.398 "trtype": "TCP", 00:14:55.398 "adrfam": "IPv4", 00:14:55.399 "traddr": "10.0.0.2", 00:14:55.399 "trsvcid": "4420" 00:14:55.399 }, 00:14:55.399 "peer_address": { 00:14:55.399 "trtype": "TCP", 00:14:55.399 "adrfam": "IPv4", 00:14:55.399 "traddr": "10.0.0.1", 00:14:55.399 "trsvcid": "48270" 00:14:55.399 }, 00:14:55.399 "auth": { 00:14:55.399 "state": "completed", 00:14:55.399 "digest": "sha384", 00:14:55.399 "dhgroup": "ffdhe2048" 00:14:55.399 } 00:14:55.399 } 00:14:55.399 ]' 00:14:55.399 20:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:55.399 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:55.399 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:55.399 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:55.399 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:55.399 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.399 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.658 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.919 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: --dhchap-ctrl-secret DHHC-1:02:YTVkZDRjYmVjNmEyNjNiNjM2YmQ5YzdmMTE5MDNmMDEzOTRjNTZlZmE2Mjk0YTU0mew6Ig==: 00:14:55.919 20:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: --dhchap-ctrl-secret DHHC-1:02:YTVkZDRjYmVjNmEyNjNiNjM2YmQ5YzdmMTE5MDNmMDEzOTRjNTZlZmE2Mjk0YTU0mew6Ig==: 00:14:56.852 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.852 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:56.852 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.852 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.852 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.852 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:56.852 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:56.852 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:57.110 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:14:57.110 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:57.110 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:57.110 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:57.110 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:57.110 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.110 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.110 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.110 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.110 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.110 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.110 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.110 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.369 00:14:57.369 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:57.369 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:57.369 20:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.627 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.627 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.627 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.627 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.627 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.627 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:57.627 { 00:14:57.627 "cntlid": 61, 00:14:57.627 "qid": 0, 00:14:57.627 "state": "enabled", 00:14:57.627 "thread": "nvmf_tgt_poll_group_000", 00:14:57.627 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:57.627 "listen_address": { 00:14:57.627 "trtype": "TCP", 00:14:57.627 "adrfam": "IPv4", 00:14:57.627 "traddr": "10.0.0.2", 00:14:57.627 "trsvcid": "4420" 00:14:57.627 }, 00:14:57.627 "peer_address": { 00:14:57.627 "trtype": "TCP", 00:14:57.627 "adrfam": "IPv4", 00:14:57.627 "traddr": "10.0.0.1", 00:14:57.627 "trsvcid": "48294" 00:14:57.627 }, 00:14:57.627 "auth": { 00:14:57.627 "state": "completed", 00:14:57.627 "digest": "sha384", 00:14:57.627 "dhgroup": "ffdhe2048" 00:14:57.627 } 00:14:57.627 } 00:14:57.627 ]' 00:14:57.628 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:57.628 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:57.628 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:57.628 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:57.628 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:57.885 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.885 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.885 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.143 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret DHHC-1:01:NGIwZjg2Yjk0ZmRiYjVjZTY3YjhkMmUxNjkzYjMxODOiDw2k: 00:14:58.143 20:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret DHHC-1:01:NGIwZjg2Yjk0ZmRiYjVjZTY3YjhkMmUxNjkzYjMxODOiDw2k: 00:14:59.078 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.078 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:59.078 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.078 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.078 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.078 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:59.078 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:59.078 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:59.336 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:14:59.336 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:59.336 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:59.336 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:59.336 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:59.336 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.336 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:14:59.336 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.336 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.336 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.336 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:59.336 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:59.336 20:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:59.594 00:14:59.594 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:59.594 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:14:59.594 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.852 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.852 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.852 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.852 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.852 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.852 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:59.852 { 00:14:59.852 "cntlid": 63, 00:14:59.852 "qid": 0, 00:14:59.852 "state": "enabled", 00:14:59.852 "thread": "nvmf_tgt_poll_group_000", 00:14:59.852 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:14:59.852 "listen_address": { 00:14:59.852 "trtype": "TCP", 00:14:59.852 "adrfam": "IPv4", 00:14:59.852 "traddr": "10.0.0.2", 00:14:59.852 "trsvcid": "4420" 00:14:59.852 }, 00:14:59.852 "peer_address": { 00:14:59.852 "trtype": "TCP", 00:14:59.852 "adrfam": "IPv4", 00:14:59.852 "traddr": "10.0.0.1", 00:14:59.852 "trsvcid": "48342" 00:14:59.852 }, 00:14:59.852 "auth": { 00:14:59.852 "state": "completed", 00:14:59.852 "digest": "sha384", 00:14:59.852 "dhgroup": "ffdhe2048" 00:14:59.852 } 00:14:59.852 } 00:14:59.852 ]' 00:14:59.852 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:59.852 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:59.852 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:59.852 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:59.852 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:00.110 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.110 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.110 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.369 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:15:00.369 20:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:15:01.301 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:15:01.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.301 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:01.301 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.301 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.301 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.302 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:01.302 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:01.302 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:01.302 20:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:01.559 20:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:01.559 20:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:01.559 20:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:01.559 20:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:01.559 20:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:01.559 20:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.560 20:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.560 20:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.560 20:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.560 20:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.560 20:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.560 20:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.560 20:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.818 
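The auth.sh line references echoed in the log (target/auth.sh@119, @120, @121 and @123) suggest the sweep being run has roughly the following shape; this is a hedged reconstruction, with array names inferred from those references rather than quoted from the script:

  for dhgroup in "${dhgroups[@]}"; do        # target/auth.sh@119: null, ffdhe2048, ffdhe3072, ...
      for keyid in "${!keys[@]}"; do         # target/auth.sh@120: key0 .. key3
          # target/auth.sh@121: reconfigure the host before each attempt
          hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
          # target/auth.sh@123: add host, attach, check qpair auth state, detach, reconnect via nvme-cli
          connect_authenticate sha384 "$dhgroup" "$keyid"
      done
  done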
00:15:01.818 20:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:01.818 20:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.818 20:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:02.075 20:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.075 20:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.075 20:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.075 20:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.075 20:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.075 20:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:02.075 { 00:15:02.075 "cntlid": 65, 00:15:02.075 "qid": 0, 00:15:02.075 "state": "enabled", 00:15:02.075 "thread": "nvmf_tgt_poll_group_000", 00:15:02.075 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:02.075 "listen_address": { 00:15:02.075 "trtype": "TCP", 00:15:02.075 "adrfam": "IPv4", 00:15:02.075 "traddr": "10.0.0.2", 00:15:02.075 "trsvcid": "4420" 00:15:02.075 }, 00:15:02.075 "peer_address": { 00:15:02.075 "trtype": "TCP", 00:15:02.075 "adrfam": "IPv4", 00:15:02.075 "traddr": "10.0.0.1", 00:15:02.075 "trsvcid": "48360" 00:15:02.075 }, 00:15:02.075 "auth": { 00:15:02.075 "state": "completed", 00:15:02.075 "digest": "sha384", 00:15:02.075 "dhgroup": "ffdhe3072" 00:15:02.075 } 00:15:02.075 } 00:15:02.075 ]' 00:15:02.075 20:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:02.076 20:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:02.076 20:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:02.076 20:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:02.076 20:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:02.333 20:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.334 20:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.334 20:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.592 20:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:15:02.592 20:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:15:03.528 20:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.528 20:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:03.528 20:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.528 20:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.528 20:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.528 20:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:03.528 20:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:03.528 20:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:03.785 20:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:15:03.785 20:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:03.785 20:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:03.785 20:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:03.785 20:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:03.785 20:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.785 20:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.785 20:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.785 20:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.785 20:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.785 20:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.785 20:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.785 20:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.043 00:15:04.043 20:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:04.043 20:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:04.043 20:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.301 20:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.301 20:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.301 20:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.301 20:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.301 20:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.301 20:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:04.301 { 00:15:04.301 "cntlid": 67, 00:15:04.301 "qid": 0, 00:15:04.301 "state": "enabled", 00:15:04.301 "thread": "nvmf_tgt_poll_group_000", 00:15:04.301 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:04.301 "listen_address": { 00:15:04.301 "trtype": "TCP", 00:15:04.301 "adrfam": "IPv4", 00:15:04.301 "traddr": "10.0.0.2", 00:15:04.301 "trsvcid": "4420" 00:15:04.301 }, 00:15:04.301 "peer_address": { 00:15:04.301 "trtype": "TCP", 00:15:04.301 "adrfam": "IPv4", 00:15:04.301 "traddr": "10.0.0.1", 00:15:04.301 "trsvcid": "54334" 00:15:04.301 }, 00:15:04.301 "auth": { 00:15:04.301 "state": "completed", 00:15:04.301 "digest": "sha384", 00:15:04.301 "dhgroup": "ffdhe3072" 00:15:04.301 } 00:15:04.301 } 00:15:04.301 ]' 00:15:04.301 20:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:04.301 20:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:04.301 20:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:04.301 20:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:04.301 20:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:04.558 20:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.558 20:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.558 20:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.817 20:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: --dhchap-ctrl-secret 
DHHC-1:02:YTVkZDRjYmVjNmEyNjNiNjM2YmQ5YzdmMTE5MDNmMDEzOTRjNTZlZmE2Mjk0YTU0mew6Ig==: 00:15:04.817 20:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: --dhchap-ctrl-secret DHHC-1:02:YTVkZDRjYmVjNmEyNjNiNjM2YmQ5YzdmMTE5MDNmMDEzOTRjNTZlZmE2Mjk0YTU0mew6Ig==: 00:15:05.749 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.749 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:05.749 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.749 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.749 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.749 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:05.749 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:05.749 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:06.006 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:15:06.006 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:06.006 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:06.006 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:06.006 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:06.006 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.006 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.006 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.006 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.006 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.006 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.006 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.006 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:06.264 00:15:06.264 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:06.264 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:06.264 20:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.522 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.522 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.522 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.522 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.522 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.522 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:06.522 { 00:15:06.522 "cntlid": 69, 00:15:06.522 "qid": 0, 00:15:06.522 "state": "enabled", 00:15:06.522 "thread": "nvmf_tgt_poll_group_000", 00:15:06.522 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:06.522 "listen_address": { 00:15:06.522 "trtype": "TCP", 00:15:06.522 "adrfam": "IPv4", 00:15:06.522 "traddr": "10.0.0.2", 00:15:06.522 "trsvcid": "4420" 00:15:06.522 }, 00:15:06.522 "peer_address": { 00:15:06.522 "trtype": "TCP", 00:15:06.522 "adrfam": "IPv4", 00:15:06.522 "traddr": "10.0.0.1", 00:15:06.522 "trsvcid": "54366" 00:15:06.522 }, 00:15:06.522 "auth": { 00:15:06.522 "state": "completed", 00:15:06.522 "digest": "sha384", 00:15:06.522 "dhgroup": "ffdhe3072" 00:15:06.522 } 00:15:06.522 } 00:15:06.522 ]' 00:15:06.522 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:06.522 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:06.522 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:06.522 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:06.522 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:06.779 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.779 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.779 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:15:07.036 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret DHHC-1:01:NGIwZjg2Yjk0ZmRiYjVjZTY3YjhkMmUxNjkzYjMxODOiDw2k: 00:15:07.036 20:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret DHHC-1:01:NGIwZjg2Yjk0ZmRiYjVjZTY3YjhkMmUxNjkzYjMxODOiDw2k: 00:15:07.968 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.968 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:07.968 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.968 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.968 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.968 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:07.968 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:07.968 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:08.226 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:15:08.226 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.226 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:08.226 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:08.226 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:08.226 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.226 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:08.226 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.226 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.226 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.226 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
00:15:08.226 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:08.226 20:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:08.484 00:15:08.484 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:08.484 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:08.484 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.777 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.777 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.777 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.777 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.777 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.777 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:08.777 { 00:15:08.777 "cntlid": 71, 00:15:08.777 "qid": 0, 00:15:08.777 "state": "enabled", 00:15:08.777 "thread": "nvmf_tgt_poll_group_000", 00:15:08.777 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:08.777 "listen_address": { 00:15:08.777 "trtype": "TCP", 00:15:08.777 "adrfam": "IPv4", 00:15:08.777 "traddr": "10.0.0.2", 00:15:08.777 "trsvcid": "4420" 00:15:08.777 }, 00:15:08.777 "peer_address": { 00:15:08.777 "trtype": "TCP", 00:15:08.777 "adrfam": "IPv4", 00:15:08.777 "traddr": "10.0.0.1", 00:15:08.777 "trsvcid": "54392" 00:15:08.777 }, 00:15:08.777 "auth": { 00:15:08.777 "state": "completed", 00:15:08.777 "digest": "sha384", 00:15:08.777 "dhgroup": "ffdhe3072" 00:15:08.777 } 00:15:08.777 } 00:15:08.777 ]' 00:15:08.777 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:08.777 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:08.777 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:08.777 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:08.777 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:09.034 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.034 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.034 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.291 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:15:09.291 20:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:15:10.222 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.222 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:10.223 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.223 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.223 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.223 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:10.223 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.223 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:10.223 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:10.223 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:15:10.223 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:10.223 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:10.223 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:10.223 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:10.223 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.223 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.223 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.223 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.223 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:10.223 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.223 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.223 20:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.785 00:15:10.785 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:10.785 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.785 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:11.042 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.042 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.042 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.042 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.042 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.042 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:11.042 { 00:15:11.042 "cntlid": 73, 00:15:11.042 "qid": 0, 00:15:11.042 "state": "enabled", 00:15:11.042 "thread": "nvmf_tgt_poll_group_000", 00:15:11.042 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:11.042 "listen_address": { 00:15:11.042 "trtype": "TCP", 00:15:11.042 "adrfam": "IPv4", 00:15:11.042 "traddr": "10.0.0.2", 00:15:11.042 "trsvcid": "4420" 00:15:11.042 }, 00:15:11.042 "peer_address": { 00:15:11.042 "trtype": "TCP", 00:15:11.042 "adrfam": "IPv4", 00:15:11.042 "traddr": "10.0.0.1", 00:15:11.042 "trsvcid": "54434" 00:15:11.042 }, 00:15:11.042 "auth": { 00:15:11.042 "state": "completed", 00:15:11.042 "digest": "sha384", 00:15:11.042 "dhgroup": "ffdhe4096" 00:15:11.042 } 00:15:11.042 } 00:15:11.042 ]' 00:15:11.042 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:11.042 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:11.042 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:11.042 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:11.042 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:11.042 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.042 
20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.042 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.299 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:15:11.299 20:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:15:12.230 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.230 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:12.230 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.230 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.230 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.230 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:12.230 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:12.230 20:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:12.488 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:15:12.488 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:12.488 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:12.488 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:12.488 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:12.488 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.488 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.488 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.488 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.488 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.488 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.488 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.488 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:13.053 00:15:13.053 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.053 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.053 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.323 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.323 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.323 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.323 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.323 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.323 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:13.323 { 00:15:13.324 "cntlid": 75, 00:15:13.324 "qid": 0, 00:15:13.324 "state": "enabled", 00:15:13.324 "thread": "nvmf_tgt_poll_group_000", 00:15:13.324 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:13.324 "listen_address": { 00:15:13.324 "trtype": "TCP", 00:15:13.324 "adrfam": "IPv4", 00:15:13.324 "traddr": "10.0.0.2", 00:15:13.324 "trsvcid": "4420" 00:15:13.324 }, 00:15:13.324 "peer_address": { 00:15:13.324 "trtype": "TCP", 00:15:13.324 "adrfam": "IPv4", 00:15:13.324 "traddr": "10.0.0.1", 00:15:13.324 "trsvcid": "59886" 00:15:13.324 }, 00:15:13.324 "auth": { 00:15:13.324 "state": "completed", 00:15:13.324 "digest": "sha384", 00:15:13.324 "dhgroup": "ffdhe4096" 00:15:13.324 } 00:15:13.324 } 00:15:13.324 ]' 00:15:13.324 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:13.324 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:13.324 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:13.324 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:15:13.324 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:13.324 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.324 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.324 20:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.612 20:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: --dhchap-ctrl-secret DHHC-1:02:YTVkZDRjYmVjNmEyNjNiNjM2YmQ5YzdmMTE5MDNmMDEzOTRjNTZlZmE2Mjk0YTU0mew6Ig==: 00:15:13.613 20:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: --dhchap-ctrl-secret DHHC-1:02:YTVkZDRjYmVjNmEyNjNiNjM2YmQ5YzdmMTE5MDNmMDEzOTRjNTZlZmE2Mjk0YTU0mew6Ig==: 00:15:14.545 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.545 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:14.545 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.545 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.545 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.545 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:14.545 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:14.545 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:14.803 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:15:14.803 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:14.803 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:14.803 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:14.803 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:14.803 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.803 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.803 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.803 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.803 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.803 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.803 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.803 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:15.369 00:15:15.369 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:15.369 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.369 20:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:15.629 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.629 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.629 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.629 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.629 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.629 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:15.629 { 00:15:15.629 "cntlid": 77, 00:15:15.629 "qid": 0, 00:15:15.629 "state": "enabled", 00:15:15.629 "thread": "nvmf_tgt_poll_group_000", 00:15:15.629 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:15.629 "listen_address": { 00:15:15.629 "trtype": "TCP", 00:15:15.629 "adrfam": "IPv4", 00:15:15.629 "traddr": "10.0.0.2", 00:15:15.629 "trsvcid": "4420" 00:15:15.629 }, 00:15:15.629 "peer_address": { 00:15:15.629 "trtype": "TCP", 00:15:15.629 "adrfam": "IPv4", 00:15:15.629 "traddr": "10.0.0.1", 00:15:15.629 "trsvcid": "59914" 00:15:15.629 }, 00:15:15.629 "auth": { 00:15:15.629 "state": "completed", 00:15:15.629 "digest": "sha384", 00:15:15.629 "dhgroup": "ffdhe4096" 00:15:15.629 } 00:15:15.629 } 00:15:15.629 ]' 00:15:15.629 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:15.629 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:15.629 20:45:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:15.629 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:15.629 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:15.629 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.629 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.629 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.886 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret DHHC-1:01:NGIwZjg2Yjk0ZmRiYjVjZTY3YjhkMmUxNjkzYjMxODOiDw2k: 00:15:15.886 20:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret DHHC-1:01:NGIwZjg2Yjk0ZmRiYjVjZTY3YjhkMmUxNjkzYjMxODOiDw2k: 00:15:16.819 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.820 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:16.820 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.820 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.820 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.820 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:16.820 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:16.820 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:17.078 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:15:17.078 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.078 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:17.078 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:17.078 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:17.078 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.078 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:17.078 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.078 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.078 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.078 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:17.078 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:17.078 20:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:17.644 00:15:17.644 20:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:17.644 20:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:17.644 20:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.902 20:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.902 20:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.902 20:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.902 20:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.902 20:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.902 20:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.902 { 00:15:17.902 "cntlid": 79, 00:15:17.902 "qid": 0, 00:15:17.902 "state": "enabled", 00:15:17.902 "thread": "nvmf_tgt_poll_group_000", 00:15:17.902 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:17.902 "listen_address": { 00:15:17.902 "trtype": "TCP", 00:15:17.902 "adrfam": "IPv4", 00:15:17.902 "traddr": "10.0.0.2", 00:15:17.902 "trsvcid": "4420" 00:15:17.902 }, 00:15:17.902 "peer_address": { 00:15:17.902 "trtype": "TCP", 00:15:17.902 "adrfam": "IPv4", 00:15:17.902 "traddr": "10.0.0.1", 00:15:17.902 "trsvcid": "59952" 00:15:17.902 }, 00:15:17.902 "auth": { 00:15:17.902 "state": "completed", 00:15:17.902 "digest": "sha384", 00:15:17.902 "dhgroup": "ffdhe4096" 00:15:17.902 } 00:15:17.902 } 00:15:17.902 ]' 00:15:17.902 20:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.902 20:45:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:17.902 20:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:17.902 20:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:17.902 20:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:17.902 20:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.902 20:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.902 20:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.160 20:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:15:18.160 20:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:15:19.094 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.094 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:19.094 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.094 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.094 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.094 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:19.094 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:19.094 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:19.094 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:19.353 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:15:19.353 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.353 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:19.353 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:19.353 20:45:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:19.353 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.353 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.353 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.353 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.353 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.353 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.353 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.353 20:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.918 00:15:19.918 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:19.918 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:19.918 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.176 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.176 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.176 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.176 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.176 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.176 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.176 { 00:15:20.176 "cntlid": 81, 00:15:20.176 "qid": 0, 00:15:20.176 "state": "enabled", 00:15:20.176 "thread": "nvmf_tgt_poll_group_000", 00:15:20.176 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:20.176 "listen_address": { 00:15:20.176 "trtype": "TCP", 00:15:20.176 "adrfam": "IPv4", 00:15:20.176 "traddr": "10.0.0.2", 00:15:20.176 "trsvcid": "4420" 00:15:20.176 }, 00:15:20.176 "peer_address": { 00:15:20.176 "trtype": "TCP", 00:15:20.176 "adrfam": "IPv4", 00:15:20.176 "traddr": "10.0.0.1", 00:15:20.176 "trsvcid": "59976" 00:15:20.176 }, 00:15:20.176 "auth": { 00:15:20.176 "state": "completed", 00:15:20.176 "digest": 
"sha384", 00:15:20.176 "dhgroup": "ffdhe6144" 00:15:20.176 } 00:15:20.176 } 00:15:20.176 ]' 00:15:20.176 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.176 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:20.176 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.434 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:20.434 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:20.434 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.434 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.434 20:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.692 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:15:20.692 20:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:15:21.627 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.627 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:21.627 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.627 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.627 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.627 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:21.627 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:21.627 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:21.884 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:15:21.884 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:21.884 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:21.884 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:21.884 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:21.884 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.884 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.884 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.884 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.884 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.884 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.884 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.884 20:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.449 00:15:22.449 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:22.449 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:22.449 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.705 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.705 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.705 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.705 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.705 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.705 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.705 { 00:15:22.705 "cntlid": 83, 00:15:22.705 "qid": 0, 00:15:22.705 "state": "enabled", 00:15:22.705 "thread": "nvmf_tgt_poll_group_000", 00:15:22.705 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:22.705 "listen_address": { 00:15:22.705 "trtype": "TCP", 00:15:22.705 "adrfam": "IPv4", 00:15:22.705 "traddr": "10.0.0.2", 00:15:22.705 
"trsvcid": "4420" 00:15:22.705 }, 00:15:22.705 "peer_address": { 00:15:22.705 "trtype": "TCP", 00:15:22.705 "adrfam": "IPv4", 00:15:22.705 "traddr": "10.0.0.1", 00:15:22.705 "trsvcid": "59992" 00:15:22.705 }, 00:15:22.705 "auth": { 00:15:22.705 "state": "completed", 00:15:22.705 "digest": "sha384", 00:15:22.705 "dhgroup": "ffdhe6144" 00:15:22.705 } 00:15:22.705 } 00:15:22.705 ]' 00:15:22.705 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.962 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:22.962 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.962 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:22.962 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.962 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.962 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.962 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.220 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: --dhchap-ctrl-secret DHHC-1:02:YTVkZDRjYmVjNmEyNjNiNjM2YmQ5YzdmMTE5MDNmMDEzOTRjNTZlZmE2Mjk0YTU0mew6Ig==: 00:15:23.220 20:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: --dhchap-ctrl-secret DHHC-1:02:YTVkZDRjYmVjNmEyNjNiNjM2YmQ5YzdmMTE5MDNmMDEzOTRjNTZlZmE2Mjk0YTU0mew6Ig==: 00:15:24.154 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.154 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:24.154 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.154 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.154 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.154 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:24.154 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:24.154 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:24.413 
20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:15:24.413 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:24.413 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:24.413 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:24.413 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:24.413 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.413 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:24.413 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.413 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.413 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.413 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:24.413 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:24.413 20:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:24.979 00:15:24.979 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:24.979 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:24.979 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.237 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.237 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.237 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.237 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.237 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.237 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:25.237 { 00:15:25.237 "cntlid": 85, 00:15:25.237 "qid": 0, 00:15:25.237 "state": "enabled", 00:15:25.237 "thread": "nvmf_tgt_poll_group_000", 00:15:25.237 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:25.237 "listen_address": { 00:15:25.237 "trtype": "TCP", 00:15:25.237 "adrfam": "IPv4", 00:15:25.237 "traddr": "10.0.0.2", 00:15:25.237 "trsvcid": "4420" 00:15:25.237 }, 00:15:25.237 "peer_address": { 00:15:25.237 "trtype": "TCP", 00:15:25.237 "adrfam": "IPv4", 00:15:25.237 "traddr": "10.0.0.1", 00:15:25.237 "trsvcid": "35170" 00:15:25.237 }, 00:15:25.237 "auth": { 00:15:25.237 "state": "completed", 00:15:25.237 "digest": "sha384", 00:15:25.237 "dhgroup": "ffdhe6144" 00:15:25.237 } 00:15:25.237 } 00:15:25.237 ]' 00:15:25.237 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:25.237 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:25.237 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:25.237 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:25.237 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:25.237 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.237 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.237 20:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.494 20:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret DHHC-1:01:NGIwZjg2Yjk0ZmRiYjVjZTY3YjhkMmUxNjkzYjMxODOiDw2k: 00:15:25.495 20:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret DHHC-1:01:NGIwZjg2Yjk0ZmRiYjVjZTY3YjhkMmUxNjkzYjMxODOiDw2k: 00:15:26.425 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.425 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:26.425 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.425 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.425 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.425 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:26.426 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:26.426 20:45:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:26.684 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:15:26.684 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:26.684 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:26.684 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:26.684 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:26.684 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.684 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:26.684 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.684 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.684 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.684 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:26.684 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:26.684 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:27.251 00:15:27.251 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:27.251 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:27.251 20:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.510 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.510 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.510 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.510 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.510 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.510 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:27.510 { 00:15:27.510 "cntlid": 87, 
00:15:27.510 "qid": 0, 00:15:27.510 "state": "enabled", 00:15:27.510 "thread": "nvmf_tgt_poll_group_000", 00:15:27.510 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:27.510 "listen_address": { 00:15:27.510 "trtype": "TCP", 00:15:27.510 "adrfam": "IPv4", 00:15:27.510 "traddr": "10.0.0.2", 00:15:27.510 "trsvcid": "4420" 00:15:27.510 }, 00:15:27.510 "peer_address": { 00:15:27.510 "trtype": "TCP", 00:15:27.510 "adrfam": "IPv4", 00:15:27.510 "traddr": "10.0.0.1", 00:15:27.510 "trsvcid": "35208" 00:15:27.510 }, 00:15:27.510 "auth": { 00:15:27.510 "state": "completed", 00:15:27.510 "digest": "sha384", 00:15:27.510 "dhgroup": "ffdhe6144" 00:15:27.510 } 00:15:27.510 } 00:15:27.510 ]' 00:15:27.510 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:27.768 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:27.768 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:27.768 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:27.768 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:27.768 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.768 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.768 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.025 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:15:28.025 20:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:15:28.963 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.963 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:28.963 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.963 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.963 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.963 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:28.963 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.963 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:28.963 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:29.529 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:15:29.529 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:29.529 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:29.529 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:29.529 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:29.529 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.529 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.529 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.529 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.529 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.529 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.529 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.529 20:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.095 00:15:30.353 20:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:30.353 20:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.353 20:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.611 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.611 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.611 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.611 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.611 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.611 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:30.611 { 00:15:30.611 "cntlid": 89, 00:15:30.611 "qid": 0, 00:15:30.611 "state": "enabled", 00:15:30.611 "thread": "nvmf_tgt_poll_group_000", 00:15:30.611 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:30.611 "listen_address": { 00:15:30.611 "trtype": "TCP", 00:15:30.611 "adrfam": "IPv4", 00:15:30.611 "traddr": "10.0.0.2", 00:15:30.611 "trsvcid": "4420" 00:15:30.611 }, 00:15:30.611 "peer_address": { 00:15:30.611 "trtype": "TCP", 00:15:30.611 "adrfam": "IPv4", 00:15:30.611 "traddr": "10.0.0.1", 00:15:30.611 "trsvcid": "35224" 00:15:30.611 }, 00:15:30.611 "auth": { 00:15:30.611 "state": "completed", 00:15:30.611 "digest": "sha384", 00:15:30.611 "dhgroup": "ffdhe8192" 00:15:30.611 } 00:15:30.611 } 00:15:30.611 ]' 00:15:30.611 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:30.611 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:30.611 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:30.611 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:30.611 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:30.611 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.611 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.611 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.869 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:15:30.869 20:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:15:31.802 20:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.802 20:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:31.802 20:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.802 20:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.802 20:45:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.802 20:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:31.802 20:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:31.802 20:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:32.060 20:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:15:32.060 20:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:32.060 20:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:32.060 20:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:32.060 20:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:32.060 20:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.060 20:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.060 20:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.060 20:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.060 20:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.060 20:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.060 20:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.060 20:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.994 00:15:32.994 20:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:32.994 20:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.994 20:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.252 20:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.252 20:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:15:33.252 20:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.252 20:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.252 20:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.252 20:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:33.252 { 00:15:33.252 "cntlid": 91, 00:15:33.252 "qid": 0, 00:15:33.252 "state": "enabled", 00:15:33.252 "thread": "nvmf_tgt_poll_group_000", 00:15:33.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:33.252 "listen_address": { 00:15:33.252 "trtype": "TCP", 00:15:33.252 "adrfam": "IPv4", 00:15:33.252 "traddr": "10.0.0.2", 00:15:33.252 "trsvcid": "4420" 00:15:33.252 }, 00:15:33.252 "peer_address": { 00:15:33.252 "trtype": "TCP", 00:15:33.252 "adrfam": "IPv4", 00:15:33.252 "traddr": "10.0.0.1", 00:15:33.252 "trsvcid": "35252" 00:15:33.252 }, 00:15:33.252 "auth": { 00:15:33.252 "state": "completed", 00:15:33.252 "digest": "sha384", 00:15:33.252 "dhgroup": "ffdhe8192" 00:15:33.252 } 00:15:33.252 } 00:15:33.252 ]' 00:15:33.252 20:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:33.252 20:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:33.252 20:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:33.252 20:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:33.252 20:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:33.252 20:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.252 20:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.252 20:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.510 20:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: --dhchap-ctrl-secret DHHC-1:02:YTVkZDRjYmVjNmEyNjNiNjM2YmQ5YzdmMTE5MDNmMDEzOTRjNTZlZmE2Mjk0YTU0mew6Ig==: 00:15:33.510 20:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: --dhchap-ctrl-secret DHHC-1:02:YTVkZDRjYmVjNmEyNjNiNjM2YmQ5YzdmMTE5MDNmMDEzOTRjNTZlZmE2Mjk0YTU0mew6Ig==: 00:15:34.444 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.444 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:34.444 20:45:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.444 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.444 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.444 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:34.444 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:34.444 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:34.702 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:15:34.702 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:34.702 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:34.702 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:34.702 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:34.702 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.702 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.702 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.702 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.703 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.703 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.703 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.703 20:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.641 00:15:35.641 20:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.641 20:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:35.641 20:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.902 20:45:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.902 20:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.902 20:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.902 20:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.902 20:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.902 20:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.902 { 00:15:35.902 "cntlid": 93, 00:15:35.902 "qid": 0, 00:15:35.903 "state": "enabled", 00:15:35.903 "thread": "nvmf_tgt_poll_group_000", 00:15:35.903 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:35.903 "listen_address": { 00:15:35.903 "trtype": "TCP", 00:15:35.903 "adrfam": "IPv4", 00:15:35.903 "traddr": "10.0.0.2", 00:15:35.903 "trsvcid": "4420" 00:15:35.903 }, 00:15:35.903 "peer_address": { 00:15:35.903 "trtype": "TCP", 00:15:35.903 "adrfam": "IPv4", 00:15:35.903 "traddr": "10.0.0.1", 00:15:35.903 "trsvcid": "43208" 00:15:35.903 }, 00:15:35.903 "auth": { 00:15:35.903 "state": "completed", 00:15:35.903 "digest": "sha384", 00:15:35.903 "dhgroup": "ffdhe8192" 00:15:35.903 } 00:15:35.903 } 00:15:35.903 ]' 00:15:35.903 20:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:35.903 20:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:35.903 20:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:36.160 20:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:36.160 20:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:36.160 20:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.160 20:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.160 20:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.417 20:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret DHHC-1:01:NGIwZjg2Yjk0ZmRiYjVjZTY3YjhkMmUxNjkzYjMxODOiDw2k: 00:15:36.418 20:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret DHHC-1:01:NGIwZjg2Yjk0ZmRiYjVjZTY3YjhkMmUxNjkzYjMxODOiDw2k: 00:15:37.350 20:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.350 20:45:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:37.350 20:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.350 20:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.350 20:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.350 20:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:37.350 20:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:37.350 20:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:37.608 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:15:37.608 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:37.608 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:37.608 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:37.608 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:37.608 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.608 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:37.608 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.608 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.608 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.608 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:37.608 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:37.608 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:38.542 00:15:38.542 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:38.542 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:38.543 20:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.800 20:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.800 20:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.800 20:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.800 20:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.800 20:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.800 20:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:38.800 { 00:15:38.800 "cntlid": 95, 00:15:38.800 "qid": 0, 00:15:38.800 "state": "enabled", 00:15:38.800 "thread": "nvmf_tgt_poll_group_000", 00:15:38.800 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:38.800 "listen_address": { 00:15:38.800 "trtype": "TCP", 00:15:38.800 "adrfam": "IPv4", 00:15:38.800 "traddr": "10.0.0.2", 00:15:38.800 "trsvcid": "4420" 00:15:38.800 }, 00:15:38.800 "peer_address": { 00:15:38.800 "trtype": "TCP", 00:15:38.800 "adrfam": "IPv4", 00:15:38.800 "traddr": "10.0.0.1", 00:15:38.800 "trsvcid": "43230" 00:15:38.800 }, 00:15:38.800 "auth": { 00:15:38.800 "state": "completed", 00:15:38.800 "digest": "sha384", 00:15:38.800 "dhgroup": "ffdhe8192" 00:15:38.800 } 00:15:38.800 } 00:15:38.800 ]' 00:15:38.800 20:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:38.800 20:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:38.801 20:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:38.801 20:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:38.801 20:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:38.801 20:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.801 20:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.801 20:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.059 20:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:15:39.059 20:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:15:39.992 20:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.993 20:45:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:39.993 20:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.993 20:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.993 20:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.993 20:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:39.993 20:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:39.993 20:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:39.993 20:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:39.993 20:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:40.250 20:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:15:40.250 20:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.251 20:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:40.251 20:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:40.251 20:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:40.251 20:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.251 20:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.251 20:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.251 20:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.251 20:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.251 20:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.251 20:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.251 20:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.508 00:15:40.767 
20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.767 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.767 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.035 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.035 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.035 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.035 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.035 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.035 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:41.035 { 00:15:41.035 "cntlid": 97, 00:15:41.035 "qid": 0, 00:15:41.035 "state": "enabled", 00:15:41.035 "thread": "nvmf_tgt_poll_group_000", 00:15:41.035 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:41.035 "listen_address": { 00:15:41.035 "trtype": "TCP", 00:15:41.035 "adrfam": "IPv4", 00:15:41.035 "traddr": "10.0.0.2", 00:15:41.035 "trsvcid": "4420" 00:15:41.035 }, 00:15:41.035 "peer_address": { 00:15:41.035 "trtype": "TCP", 00:15:41.035 "adrfam": "IPv4", 00:15:41.035 "traddr": "10.0.0.1", 00:15:41.035 "trsvcid": "43274" 00:15:41.035 }, 00:15:41.035 "auth": { 00:15:41.035 "state": "completed", 00:15:41.035 "digest": "sha512", 00:15:41.035 "dhgroup": "null" 00:15:41.035 } 00:15:41.035 } 00:15:41.035 ]' 00:15:41.035 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:41.035 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:41.035 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:41.035 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:41.035 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:41.035 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.035 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.035 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.346 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:15:41.346 20:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:15:42.303 20:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.303 20:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:42.303 20:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.303 20:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.303 20:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.303 20:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:42.303 20:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:42.303 20:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:42.561 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:15:42.562 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.562 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:42.562 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:42.562 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:42.562 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.562 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.562 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.562 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.562 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.562 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.562 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:42.562 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.127 00:15:43.127 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.127 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.127 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.385 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.385 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.385 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.385 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.385 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.385 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:43.385 { 00:15:43.385 "cntlid": 99, 00:15:43.385 "qid": 0, 00:15:43.385 "state": "enabled", 00:15:43.385 "thread": "nvmf_tgt_poll_group_000", 00:15:43.385 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:43.385 "listen_address": { 00:15:43.385 "trtype": "TCP", 00:15:43.385 "adrfam": "IPv4", 00:15:43.385 "traddr": "10.0.0.2", 00:15:43.385 "trsvcid": "4420" 00:15:43.385 }, 00:15:43.385 "peer_address": { 00:15:43.385 "trtype": "TCP", 00:15:43.385 "adrfam": "IPv4", 00:15:43.385 "traddr": "10.0.0.1", 00:15:43.385 "trsvcid": "48238" 00:15:43.385 }, 00:15:43.385 "auth": { 00:15:43.385 "state": "completed", 00:15:43.385 "digest": "sha512", 00:15:43.385 "dhgroup": "null" 00:15:43.385 } 00:15:43.385 } 00:15:43.385 ]' 00:15:43.385 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:43.385 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:43.385 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:43.385 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:43.385 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:43.385 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.385 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.385 20:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.643 20:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: --dhchap-ctrl-secret DHHC-1:02:YTVkZDRjYmVjNmEyNjNiNjM2YmQ5YzdmMTE5MDNmMDEzOTRjNTZlZmE2Mjk0YTU0mew6Ig==: 00:15:43.643 20:45:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: --dhchap-ctrl-secret DHHC-1:02:YTVkZDRjYmVjNmEyNjNiNjM2YmQ5YzdmMTE5MDNmMDEzOTRjNTZlZmE2Mjk0YTU0mew6Ig==: 00:15:44.579 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.579 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:44.579 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.579 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.579 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.579 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:44.579 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:44.579 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:44.836 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:15:44.836 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:44.836 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:44.836 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:44.836 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:44.837 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.837 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.837 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.837 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.837 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.837 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:44.837 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:15:44.837 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.403 00:15:45.403 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.403 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.403 20:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.661 20:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.661 20:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.661 20:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.661 20:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.661 20:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.662 20:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.662 { 00:15:45.662 "cntlid": 101, 00:15:45.662 "qid": 0, 00:15:45.662 "state": "enabled", 00:15:45.662 "thread": "nvmf_tgt_poll_group_000", 00:15:45.662 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:45.662 "listen_address": { 00:15:45.662 "trtype": "TCP", 00:15:45.662 "adrfam": "IPv4", 00:15:45.662 "traddr": "10.0.0.2", 00:15:45.662 "trsvcid": "4420" 00:15:45.662 }, 00:15:45.662 "peer_address": { 00:15:45.662 "trtype": "TCP", 00:15:45.662 "adrfam": "IPv4", 00:15:45.662 "traddr": "10.0.0.1", 00:15:45.662 "trsvcid": "48260" 00:15:45.662 }, 00:15:45.662 "auth": { 00:15:45.662 "state": "completed", 00:15:45.662 "digest": "sha512", 00:15:45.662 "dhgroup": "null" 00:15:45.662 } 00:15:45.662 } 00:15:45.662 ]' 00:15:45.662 20:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:45.662 20:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:45.662 20:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.662 20:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:45.662 20:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.662 20:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.662 20:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.662 20:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.919 20:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret DHHC-1:01:NGIwZjg2Yjk0ZmRiYjVjZTY3YjhkMmUxNjkzYjMxODOiDw2k: 00:15:45.919 20:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret DHHC-1:01:NGIwZjg2Yjk0ZmRiYjVjZTY3YjhkMmUxNjkzYjMxODOiDw2k: 00:15:46.851 20:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.851 20:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:46.851 20:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.851 20:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.851 20:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.851 20:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.851 20:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:46.851 20:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:47.416 20:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:15:47.416 20:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.416 20:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:47.416 20:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:47.416 20:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:47.416 20:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.416 20:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:47.416 20:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.416 20:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.416 20:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.416 20:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:47.416 20:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:47.416 20:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:47.673 00:15:47.673 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.673 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.673 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.930 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.930 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.930 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.930 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.930 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.930 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.930 { 00:15:47.930 "cntlid": 103, 00:15:47.930 "qid": 0, 00:15:47.930 "state": "enabled", 00:15:47.930 "thread": "nvmf_tgt_poll_group_000", 00:15:47.930 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:47.930 "listen_address": { 00:15:47.930 "trtype": "TCP", 00:15:47.930 "adrfam": "IPv4", 00:15:47.930 "traddr": "10.0.0.2", 00:15:47.930 "trsvcid": "4420" 00:15:47.930 }, 00:15:47.930 "peer_address": { 00:15:47.930 "trtype": "TCP", 00:15:47.930 "adrfam": "IPv4", 00:15:47.930 "traddr": "10.0.0.1", 00:15:47.930 "trsvcid": "48286" 00:15:47.930 }, 00:15:47.930 "auth": { 00:15:47.930 "state": "completed", 00:15:47.930 "digest": "sha512", 00:15:47.930 "dhgroup": "null" 00:15:47.930 } 00:15:47.930 } 00:15:47.930 ]' 00:15:47.930 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.930 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:47.930 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.930 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:47.930 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.930 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.930 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.930 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.493 20:45:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:15:48.493 20:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:15:49.057 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.057 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:49.057 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.057 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.057 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.057 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:49.314 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:49.314 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:49.314 20:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:49.571 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:15:49.571 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:49.571 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:49.571 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:49.571 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:49.571 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.571 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.571 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.571 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.571 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.571 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
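[editor's note] The xtrace entries above walk one "connect_authenticate" round of the auth test: the host-side bdev driver is restricted to the digest/dhgroup under test, the host NQN is authorized on the subsystem with a DH-CHAP key pair, and a controller is attached with the same keys. A minimal standalone sketch of that RPC sequence, assuming the key objects key0/ckey0 were already registered with the target earlier in the run and using the paths and addresses shown in this log, is:

  # One connect_authenticate round (sha512 digest, ffdhe2048 DH group, key index 0) as traced above.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
  subnqn=nqn.2024-03.io.spdk:cnode0

  # Host side: limit the NVMe bdev driver to the digest/dhgroup being exercised.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
  # Target side (default RPC socket assumed): authorize the host with its DH-CHAP key,
  # plus a controller key so authentication is bidirectional.
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Host side: attach a controller over TCP, authenticating with the same key pair.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
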
00:15:49.571 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.571 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.828 00:15:49.828 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.828 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.828 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.085 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.085 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.085 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.085 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.085 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.085 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.085 { 00:15:50.085 "cntlid": 105, 00:15:50.085 "qid": 0, 00:15:50.085 "state": "enabled", 00:15:50.085 "thread": "nvmf_tgt_poll_group_000", 00:15:50.085 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:50.085 "listen_address": { 00:15:50.085 "trtype": "TCP", 00:15:50.085 "adrfam": "IPv4", 00:15:50.085 "traddr": "10.0.0.2", 00:15:50.085 "trsvcid": "4420" 00:15:50.085 }, 00:15:50.085 "peer_address": { 00:15:50.085 "trtype": "TCP", 00:15:50.085 "adrfam": "IPv4", 00:15:50.085 "traddr": "10.0.0.1", 00:15:50.085 "trsvcid": "48298" 00:15:50.085 }, 00:15:50.085 "auth": { 00:15:50.085 "state": "completed", 00:15:50.085 "digest": "sha512", 00:15:50.085 "dhgroup": "ffdhe2048" 00:15:50.085 } 00:15:50.085 } 00:15:50.085 ]' 00:15:50.085 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.085 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:50.085 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.342 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:50.342 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.342 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.342 20:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.342 20:45:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.599 20:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:15:50.599 20:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:15:51.531 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.531 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:51.531 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.531 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.531 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.531 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.531 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:51.531 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:51.789 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:15:51.789 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.789 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:51.789 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:51.789 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:51.789 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.789 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.789 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.789 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:51.789 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.789 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.789 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.789 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.047 00:15:52.047 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.047 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.047 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.304 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.304 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.304 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.304 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.304 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.304 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.304 { 00:15:52.304 "cntlid": 107, 00:15:52.304 "qid": 0, 00:15:52.304 "state": "enabled", 00:15:52.304 "thread": "nvmf_tgt_poll_group_000", 00:15:52.304 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:52.304 "listen_address": { 00:15:52.304 "trtype": "TCP", 00:15:52.304 "adrfam": "IPv4", 00:15:52.304 "traddr": "10.0.0.2", 00:15:52.304 "trsvcid": "4420" 00:15:52.304 }, 00:15:52.304 "peer_address": { 00:15:52.304 "trtype": "TCP", 00:15:52.304 "adrfam": "IPv4", 00:15:52.305 "traddr": "10.0.0.1", 00:15:52.305 "trsvcid": "48322" 00:15:52.305 }, 00:15:52.305 "auth": { 00:15:52.305 "state": "completed", 00:15:52.305 "digest": "sha512", 00:15:52.305 "dhgroup": "ffdhe2048" 00:15:52.305 } 00:15:52.305 } 00:15:52.305 ]' 00:15:52.305 20:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.562 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:52.562 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.562 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:52.562 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:15:52.562 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.562 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.562 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.819 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: --dhchap-ctrl-secret DHHC-1:02:YTVkZDRjYmVjNmEyNjNiNjM2YmQ5YzdmMTE5MDNmMDEzOTRjNTZlZmE2Mjk0YTU0mew6Ig==: 00:15:52.819 20:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: --dhchap-ctrl-secret DHHC-1:02:YTVkZDRjYmVjNmEyNjNiNjM2YmQ5YzdmMTE5MDNmMDEzOTRjNTZlZmE2Mjk0YTU0mew6Ig==: 00:15:53.752 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.752 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:53.752 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.752 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.752 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.752 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.752 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:53.752 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:54.010 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:15:54.010 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.010 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:54.010 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:54.010 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:54.010 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.010 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 
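[editor's note] Alongside the SPDK-host attach, the log also exercises a kernel-initiator leg: nvme-cli connects with explicit DHHC-1 secrets, then the controller is disconnected and the host de-authorized before the next key/dhgroup round. A rough sketch of that leg, where $key and $ctrl_key stand for the DHHC-1:xx:...: strings printed in the surrounding entries and $rpc/$hostnqn/$subnqn are as in the earlier sketch, is:

  # Kernel initiator: connect with explicit DH-CHAP secrets (host secret and controller secret).
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 \
      --dhchap-secret "$key" --dhchap-ctrl-secret "$ctrl_key"
  # Tear down: drop the kernel controller, then remove the host from the subsystem
  # so the next round can re-add it with a different key.
  nvme disconnect -n "$subnqn"
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
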
00:15:54.010 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.010 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.010 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.010 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.010 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.010 20:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.576 00:15:54.576 20:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:54.576 20:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:54.576 20:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.834 20:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.834 20:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.834 20:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.834 20:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.834 20:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.834 20:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.834 { 00:15:54.834 "cntlid": 109, 00:15:54.834 "qid": 0, 00:15:54.834 "state": "enabled", 00:15:54.834 "thread": "nvmf_tgt_poll_group_000", 00:15:54.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:54.834 "listen_address": { 00:15:54.834 "trtype": "TCP", 00:15:54.834 "adrfam": "IPv4", 00:15:54.834 "traddr": "10.0.0.2", 00:15:54.834 "trsvcid": "4420" 00:15:54.834 }, 00:15:54.834 "peer_address": { 00:15:54.834 "trtype": "TCP", 00:15:54.834 "adrfam": "IPv4", 00:15:54.834 "traddr": "10.0.0.1", 00:15:54.834 "trsvcid": "38956" 00:15:54.834 }, 00:15:54.834 "auth": { 00:15:54.834 "state": "completed", 00:15:54.834 "digest": "sha512", 00:15:54.834 "dhgroup": "ffdhe2048" 00:15:54.834 } 00:15:54.834 } 00:15:54.834 ]' 00:15:54.834 20:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.834 20:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:54.834 20:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.834 20:45:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:54.834 20:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.834 20:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.834 20:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.834 20:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.091 20:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret DHHC-1:01:NGIwZjg2Yjk0ZmRiYjVjZTY3YjhkMmUxNjkzYjMxODOiDw2k: 00:15:55.091 20:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret DHHC-1:01:NGIwZjg2Yjk0ZmRiYjVjZTY3YjhkMmUxNjkzYjMxODOiDw2k: 00:15:56.027 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.027 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:56.027 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.027 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.027 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.027 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.027 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:56.027 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:56.286 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:15:56.286 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.286 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:56.286 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:56.286 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:56.286 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.286 20:45:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:15:56.286 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.286 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.286 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.286 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:56.286 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:56.286 20:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:56.544 00:15:56.544 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:56.544 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.544 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.108 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.108 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.108 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.108 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.108 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.108 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.108 { 00:15:57.108 "cntlid": 111, 00:15:57.108 "qid": 0, 00:15:57.108 "state": "enabled", 00:15:57.108 "thread": "nvmf_tgt_poll_group_000", 00:15:57.108 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:57.108 "listen_address": { 00:15:57.108 "trtype": "TCP", 00:15:57.108 "adrfam": "IPv4", 00:15:57.108 "traddr": "10.0.0.2", 00:15:57.108 "trsvcid": "4420" 00:15:57.108 }, 00:15:57.108 "peer_address": { 00:15:57.108 "trtype": "TCP", 00:15:57.108 "adrfam": "IPv4", 00:15:57.108 "traddr": "10.0.0.1", 00:15:57.108 "trsvcid": "38976" 00:15:57.108 }, 00:15:57.108 "auth": { 00:15:57.108 "state": "completed", 00:15:57.108 "digest": "sha512", 00:15:57.108 "dhgroup": "ffdhe2048" 00:15:57.108 } 00:15:57.108 } 00:15:57.108 ]' 00:15:57.108 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.108 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:57.108 
20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.108 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:57.108 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.108 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.108 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.109 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.366 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:15:57.366 20:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:15:58.299 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.299 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:58.299 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.299 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.299 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.299 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:58.299 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.299 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:58.299 20:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:58.557 20:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:15:58.557 20:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.557 20:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:58.557 20:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:58.557 20:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:58.557 20:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.557 20:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.558 20:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.558 20:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.558 20:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.558 20:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.558 20:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.558 20:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.123 00:15:59.123 20:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.123 20:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.123 20:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.381 20:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.381 20:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.381 20:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.381 20:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.381 20:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.381 20:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:59.381 { 00:15:59.381 "cntlid": 113, 00:15:59.381 "qid": 0, 00:15:59.381 "state": "enabled", 00:15:59.381 "thread": "nvmf_tgt_poll_group_000", 00:15:59.381 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:15:59.381 "listen_address": { 00:15:59.381 "trtype": "TCP", 00:15:59.381 "adrfam": "IPv4", 00:15:59.381 "traddr": "10.0.0.2", 00:15:59.381 "trsvcid": "4420" 00:15:59.381 }, 00:15:59.381 "peer_address": { 00:15:59.381 "trtype": "TCP", 00:15:59.381 "adrfam": "IPv4", 00:15:59.381 "traddr": "10.0.0.1", 00:15:59.381 "trsvcid": "39010" 00:15:59.381 }, 00:15:59.381 "auth": { 00:15:59.381 "state": "completed", 00:15:59.381 "digest": "sha512", 00:15:59.381 "dhgroup": "ffdhe3072" 00:15:59.381 } 00:15:59.381 } 00:15:59.381 ]' 00:15:59.381 20:46:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.381 20:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:59.381 20:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.381 20:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:59.381 20:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.381 20:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.381 20:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.381 20:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.640 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:15:59.640 20:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:16:00.574 20:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.574 20:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:00.574 20:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.574 20:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.574 20:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.574 20:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:00.574 20:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:00.575 20:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:00.832 20:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:00.832 20:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.832 20:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:16:00.832 20:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:00.832 20:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:00.832 20:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.832 20:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.832 20:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.832 20:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.832 20:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.832 20:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.832 20:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.832 20:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.397 00:16:01.397 20:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.397 20:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.397 20:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.655 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.655 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.655 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.655 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.655 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.655 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.655 { 00:16:01.655 "cntlid": 115, 00:16:01.655 "qid": 0, 00:16:01.655 "state": "enabled", 00:16:01.655 "thread": "nvmf_tgt_poll_group_000", 00:16:01.655 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:01.655 "listen_address": { 00:16:01.655 "trtype": "TCP", 00:16:01.655 "adrfam": "IPv4", 00:16:01.655 "traddr": "10.0.0.2", 00:16:01.655 "trsvcid": "4420" 00:16:01.655 }, 00:16:01.655 "peer_address": { 00:16:01.655 "trtype": "TCP", 00:16:01.655 "adrfam": "IPv4", 
00:16:01.655 "traddr": "10.0.0.1", 00:16:01.655 "trsvcid": "39040" 00:16:01.655 }, 00:16:01.655 "auth": { 00:16:01.655 "state": "completed", 00:16:01.655 "digest": "sha512", 00:16:01.655 "dhgroup": "ffdhe3072" 00:16:01.655 } 00:16:01.655 } 00:16:01.655 ]' 00:16:01.655 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.655 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:01.656 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.656 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:01.656 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.656 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.656 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.656 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.221 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: --dhchap-ctrl-secret DHHC-1:02:YTVkZDRjYmVjNmEyNjNiNjM2YmQ5YzdmMTE5MDNmMDEzOTRjNTZlZmE2Mjk0YTU0mew6Ig==: 00:16:02.222 20:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: --dhchap-ctrl-secret DHHC-1:02:YTVkZDRjYmVjNmEyNjNiNjM2YmQ5YzdmMTE5MDNmMDEzOTRjNTZlZmE2Mjk0YTU0mew6Ig==: 00:16:03.155 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.155 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:03.155 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.155 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.155 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.155 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.155 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:03.155 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:03.412 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:16:03.412 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.412 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:03.412 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:03.413 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:03.413 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.413 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.413 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.413 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.413 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.413 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.413 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.413 20:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.671 00:16:03.671 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.671 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.671 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.931 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.931 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.931 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.931 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.931 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.931 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.931 { 00:16:03.931 "cntlid": 117, 00:16:03.931 "qid": 0, 00:16:03.931 "state": "enabled", 00:16:03.931 "thread": "nvmf_tgt_poll_group_000", 00:16:03.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:03.931 "listen_address": { 00:16:03.931 "trtype": "TCP", 
00:16:03.931 "adrfam": "IPv4", 00:16:03.931 "traddr": "10.0.0.2", 00:16:03.931 "trsvcid": "4420" 00:16:03.931 }, 00:16:03.931 "peer_address": { 00:16:03.931 "trtype": "TCP", 00:16:03.931 "adrfam": "IPv4", 00:16:03.931 "traddr": "10.0.0.1", 00:16:03.931 "trsvcid": "50558" 00:16:03.931 }, 00:16:03.931 "auth": { 00:16:03.931 "state": "completed", 00:16:03.931 "digest": "sha512", 00:16:03.931 "dhgroup": "ffdhe3072" 00:16:03.931 } 00:16:03.931 } 00:16:03.931 ]' 00:16:03.931 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:04.192 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:04.192 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:04.192 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:04.192 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:04.192 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.192 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.192 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.451 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret DHHC-1:01:NGIwZjg2Yjk0ZmRiYjVjZTY3YjhkMmUxNjkzYjMxODOiDw2k: 00:16:04.451 20:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret DHHC-1:01:NGIwZjg2Yjk0ZmRiYjVjZTY3YjhkMmUxNjkzYjMxODOiDw2k: 00:16:05.464 20:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.464 20:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:05.464 20:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.464 20:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.464 20:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.464 20:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:05.464 20:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:05.464 20:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:05.721 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:05.721 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.721 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:05.721 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:05.721 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:05.721 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.721 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:05.721 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.721 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.721 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.721 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:05.721 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:05.721 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:05.978 00:16:05.978 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.978 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.978 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.236 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.236 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.236 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.236 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.236 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.236 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.236 { 00:16:06.236 "cntlid": 119, 00:16:06.236 "qid": 0, 00:16:06.236 "state": "enabled", 00:16:06.236 "thread": "nvmf_tgt_poll_group_000", 00:16:06.236 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:06.236 "listen_address": { 00:16:06.236 "trtype": "TCP", 00:16:06.236 "adrfam": "IPv4", 00:16:06.236 "traddr": "10.0.0.2", 00:16:06.236 "trsvcid": "4420" 00:16:06.237 }, 00:16:06.237 "peer_address": { 00:16:06.237 "trtype": "TCP", 00:16:06.237 "adrfam": "IPv4", 00:16:06.237 "traddr": "10.0.0.1", 00:16:06.237 "trsvcid": "50576" 00:16:06.237 }, 00:16:06.237 "auth": { 00:16:06.237 "state": "completed", 00:16:06.237 "digest": "sha512", 00:16:06.237 "dhgroup": "ffdhe3072" 00:16:06.237 } 00:16:06.237 } 00:16:06.237 ]' 00:16:06.237 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:06.237 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:06.237 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:06.237 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:06.237 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.494 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.494 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.494 20:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.752 20:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:16:06.752 20:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:16:07.684 20:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.684 20:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:07.684 20:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.684 20:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.684 20:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.684 20:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:07.684 20:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.684 20:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:07.684 20:46:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:07.941 20:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:07.941 20:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.941 20:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:07.941 20:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:07.941 20:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:07.941 20:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.941 20:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.941 20:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.941 20:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.941 20:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.941 20:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.941 20:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.941 20:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.506 00:16:08.506 20:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:08.506 20:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.506 20:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.764 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.764 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.764 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.764 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.764 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.764 20:46:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.764 { 00:16:08.764 "cntlid": 121, 00:16:08.764 "qid": 0, 00:16:08.764 "state": "enabled", 00:16:08.764 "thread": "nvmf_tgt_poll_group_000", 00:16:08.764 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:08.764 "listen_address": { 00:16:08.764 "trtype": "TCP", 00:16:08.764 "adrfam": "IPv4", 00:16:08.764 "traddr": "10.0.0.2", 00:16:08.764 "trsvcid": "4420" 00:16:08.764 }, 00:16:08.764 "peer_address": { 00:16:08.764 "trtype": "TCP", 00:16:08.764 "adrfam": "IPv4", 00:16:08.764 "traddr": "10.0.0.1", 00:16:08.764 "trsvcid": "50606" 00:16:08.764 }, 00:16:08.764 "auth": { 00:16:08.764 "state": "completed", 00:16:08.764 "digest": "sha512", 00:16:08.764 "dhgroup": "ffdhe4096" 00:16:08.764 } 00:16:08.764 } 00:16:08.764 ]' 00:16:08.764 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.764 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:08.764 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.764 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:08.764 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.764 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.764 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.764 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.021 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:16:09.021 20:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:16:09.954 20:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.954 20:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:09.954 20:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.954 20:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.954 20:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:09.954 20:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.954 20:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:09.954 20:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:10.520 20:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:10.520 20:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:10.520 20:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:10.520 20:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:10.520 20:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:10.520 20:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.520 20:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.520 20:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.520 20:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.520 20:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.520 20:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.520 20:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.520 20:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.777 00:16:10.777 20:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.777 20:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.777 20:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.036 20:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.036 20:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.036 20:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.036 20:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.036 20:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.036 20:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.036 { 00:16:11.036 "cntlid": 123, 00:16:11.036 "qid": 0, 00:16:11.036 "state": "enabled", 00:16:11.036 "thread": "nvmf_tgt_poll_group_000", 00:16:11.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:11.036 "listen_address": { 00:16:11.036 "trtype": "TCP", 00:16:11.036 "adrfam": "IPv4", 00:16:11.036 "traddr": "10.0.0.2", 00:16:11.036 "trsvcid": "4420" 00:16:11.036 }, 00:16:11.036 "peer_address": { 00:16:11.036 "trtype": "TCP", 00:16:11.036 "adrfam": "IPv4", 00:16:11.036 "traddr": "10.0.0.1", 00:16:11.036 "trsvcid": "50634" 00:16:11.036 }, 00:16:11.036 "auth": { 00:16:11.036 "state": "completed", 00:16:11.036 "digest": "sha512", 00:16:11.036 "dhgroup": "ffdhe4096" 00:16:11.036 } 00:16:11.036 } 00:16:11.036 ]' 00:16:11.036 20:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:11.036 20:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:11.036 20:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.036 20:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:11.036 20:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.293 20:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.293 20:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.293 20:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.551 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: --dhchap-ctrl-secret DHHC-1:02:YTVkZDRjYmVjNmEyNjNiNjM2YmQ5YzdmMTE5MDNmMDEzOTRjNTZlZmE2Mjk0YTU0mew6Ig==: 00:16:11.551 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: --dhchap-ctrl-secret DHHC-1:02:YTVkZDRjYmVjNmEyNjNiNjM2YmQ5YzdmMTE5MDNmMDEzOTRjNTZlZmE2Mjk0YTU0mew6Ig==: 00:16:12.484 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.484 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:12.484 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.484 20:46:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.484 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.484 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.484 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:12.484 20:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:12.741 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:12.741 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.741 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:12.741 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:12.741 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:12.741 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.741 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.741 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.741 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.741 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.741 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.741 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.741 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.999 00:16:12.999 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.999 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.999 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.563 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.563 20:46:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.563 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.563 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.563 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.563 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.563 { 00:16:13.563 "cntlid": 125, 00:16:13.563 "qid": 0, 00:16:13.563 "state": "enabled", 00:16:13.563 "thread": "nvmf_tgt_poll_group_000", 00:16:13.563 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:13.563 "listen_address": { 00:16:13.563 "trtype": "TCP", 00:16:13.563 "adrfam": "IPv4", 00:16:13.563 "traddr": "10.0.0.2", 00:16:13.563 "trsvcid": "4420" 00:16:13.563 }, 00:16:13.563 "peer_address": { 00:16:13.563 "trtype": "TCP", 00:16:13.563 "adrfam": "IPv4", 00:16:13.563 "traddr": "10.0.0.1", 00:16:13.563 "trsvcid": "40574" 00:16:13.563 }, 00:16:13.563 "auth": { 00:16:13.563 "state": "completed", 00:16:13.563 "digest": "sha512", 00:16:13.563 "dhgroup": "ffdhe4096" 00:16:13.563 } 00:16:13.563 } 00:16:13.563 ]' 00:16:13.563 20:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.563 20:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:13.563 20:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.563 20:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:13.563 20:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:13.563 20:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.563 20:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.563 20:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.820 20:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret DHHC-1:01:NGIwZjg2Yjk0ZmRiYjVjZTY3YjhkMmUxNjkzYjMxODOiDw2k: 00:16:13.820 20:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret DHHC-1:01:NGIwZjg2Yjk0ZmRiYjVjZTY3YjhkMmUxNjkzYjMxODOiDw2k: 00:16:14.753 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.753 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:14.753 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.753 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.753 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.753 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.753 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:14.753 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:15.011 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:16:15.011 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.011 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:15.011 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:15.011 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:15.011 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.011 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:15.011 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.011 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.011 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.011 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:15.011 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:15.011 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:15.575 00:16:15.575 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:15.575 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:15.575 20:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.575 20:46:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.575 20:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.576 20:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.576 20:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.833 20:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.833 20:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.833 { 00:16:15.833 "cntlid": 127, 00:16:15.833 "qid": 0, 00:16:15.833 "state": "enabled", 00:16:15.833 "thread": "nvmf_tgt_poll_group_000", 00:16:15.833 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:15.833 "listen_address": { 00:16:15.833 "trtype": "TCP", 00:16:15.833 "adrfam": "IPv4", 00:16:15.833 "traddr": "10.0.0.2", 00:16:15.833 "trsvcid": "4420" 00:16:15.833 }, 00:16:15.833 "peer_address": { 00:16:15.833 "trtype": "TCP", 00:16:15.833 "adrfam": "IPv4", 00:16:15.833 "traddr": "10.0.0.1", 00:16:15.833 "trsvcid": "40586" 00:16:15.833 }, 00:16:15.833 "auth": { 00:16:15.833 "state": "completed", 00:16:15.833 "digest": "sha512", 00:16:15.833 "dhgroup": "ffdhe4096" 00:16:15.833 } 00:16:15.833 } 00:16:15.833 ]' 00:16:15.833 20:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.833 20:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:15.833 20:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.833 20:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:15.833 20:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.833 20:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.833 20:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.833 20:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.089 20:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:16:16.089 20:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:16:17.022 20:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.022 20:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:17.022 20:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.022 20:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.022 20:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.022 20:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:17.022 20:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.022 20:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:17.022 20:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:17.280 20:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:17.280 20:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.280 20:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:17.280 20:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:17.280 20:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:17.280 20:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.280 20:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.280 20:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.280 20:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.280 20:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.280 20:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.280 20:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.280 20:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.845 00:16:17.845 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.845 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.845 
20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.102 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.102 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.102 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.102 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.102 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.102 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.102 { 00:16:18.102 "cntlid": 129, 00:16:18.102 "qid": 0, 00:16:18.102 "state": "enabled", 00:16:18.102 "thread": "nvmf_tgt_poll_group_000", 00:16:18.102 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:18.102 "listen_address": { 00:16:18.102 "trtype": "TCP", 00:16:18.102 "adrfam": "IPv4", 00:16:18.102 "traddr": "10.0.0.2", 00:16:18.102 "trsvcid": "4420" 00:16:18.102 }, 00:16:18.102 "peer_address": { 00:16:18.103 "trtype": "TCP", 00:16:18.103 "adrfam": "IPv4", 00:16:18.103 "traddr": "10.0.0.1", 00:16:18.103 "trsvcid": "40616" 00:16:18.103 }, 00:16:18.103 "auth": { 00:16:18.103 "state": "completed", 00:16:18.103 "digest": "sha512", 00:16:18.103 "dhgroup": "ffdhe6144" 00:16:18.103 } 00:16:18.103 } 00:16:18.103 ]' 00:16:18.103 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.103 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:18.103 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.103 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:18.103 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.360 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.360 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.360 20:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.618 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:16:18.618 20:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret 
DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:16:19.550 20:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.550 20:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:19.550 20:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.550 20:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.550 20:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.550 20:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.550 20:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:19.550 20:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:19.808 20:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:16:19.808 20:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.808 20:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:19.808 20:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:19.808 20:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:19.808 20:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.808 20:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.808 20:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.808 20:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.808 20:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.808 20:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.808 20:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.808 20:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.373 00:16:20.373 20:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.373 20:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.373 20:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.630 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.630 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.630 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.630 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.630 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.630 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.630 { 00:16:20.630 "cntlid": 131, 00:16:20.630 "qid": 0, 00:16:20.630 "state": "enabled", 00:16:20.630 "thread": "nvmf_tgt_poll_group_000", 00:16:20.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:20.630 "listen_address": { 00:16:20.630 "trtype": "TCP", 00:16:20.630 "adrfam": "IPv4", 00:16:20.630 "traddr": "10.0.0.2", 00:16:20.630 "trsvcid": "4420" 00:16:20.630 }, 00:16:20.630 "peer_address": { 00:16:20.631 "trtype": "TCP", 00:16:20.631 "adrfam": "IPv4", 00:16:20.631 "traddr": "10.0.0.1", 00:16:20.631 "trsvcid": "40632" 00:16:20.631 }, 00:16:20.631 "auth": { 00:16:20.631 "state": "completed", 00:16:20.631 "digest": "sha512", 00:16:20.631 "dhgroup": "ffdhe6144" 00:16:20.631 } 00:16:20.631 } 00:16:20.631 ]' 00:16:20.631 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.631 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:20.631 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.631 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:20.631 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.631 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.631 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.631 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.889 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: --dhchap-ctrl-secret DHHC-1:02:YTVkZDRjYmVjNmEyNjNiNjM2YmQ5YzdmMTE5MDNmMDEzOTRjNTZlZmE2Mjk0YTU0mew6Ig==: 00:16:20.889 20:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: --dhchap-ctrl-secret DHHC-1:02:YTVkZDRjYmVjNmEyNjNiNjM2YmQ5YzdmMTE5MDNmMDEzOTRjNTZlZmE2Mjk0YTU0mew6Ig==: 00:16:21.822 20:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.822 20:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:21.822 20:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.822 20:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.822 20:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.822 20:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.822 20:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:21.822 20:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:22.080 20:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:16:22.080 20:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.080 20:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:22.080 20:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:22.080 20:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:22.080 20:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.080 20:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.080 20:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.080 20:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.080 20:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.080 20:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.080 20:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.080 20:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.645 00:16:22.645 20:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.645 20:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.646 20:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.904 20:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.904 20:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.904 20:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.904 20:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.904 20:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.904 20:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.904 { 00:16:22.904 "cntlid": 133, 00:16:22.904 "qid": 0, 00:16:22.904 "state": "enabled", 00:16:22.904 "thread": "nvmf_tgt_poll_group_000", 00:16:22.904 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:22.904 "listen_address": { 00:16:22.904 "trtype": "TCP", 00:16:22.904 "adrfam": "IPv4", 00:16:22.904 "traddr": "10.0.0.2", 00:16:22.904 "trsvcid": "4420" 00:16:22.904 }, 00:16:22.904 "peer_address": { 00:16:22.904 "trtype": "TCP", 00:16:22.904 "adrfam": "IPv4", 00:16:22.904 "traddr": "10.0.0.1", 00:16:22.904 "trsvcid": "40660" 00:16:22.904 }, 00:16:22.904 "auth": { 00:16:22.904 "state": "completed", 00:16:22.904 "digest": "sha512", 00:16:22.904 "dhgroup": "ffdhe6144" 00:16:22.904 } 00:16:22.904 } 00:16:22.904 ]' 00:16:22.904 20:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.904 20:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:22.904 20:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.162 20:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:23.162 20:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.162 20:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.162 20:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.162 20:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.421 20:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret 
DHHC-1:01:NGIwZjg2Yjk0ZmRiYjVjZTY3YjhkMmUxNjkzYjMxODOiDw2k: 00:16:23.421 20:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret DHHC-1:01:NGIwZjg2Yjk0ZmRiYjVjZTY3YjhkMmUxNjkzYjMxODOiDw2k: 00:16:24.354 20:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.354 20:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:24.354 20:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.354 20:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.354 20:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.354 20:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.354 20:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:24.354 20:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:24.620 20:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:24.620 20:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.620 20:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:24.620 20:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:24.620 20:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:24.620 20:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.620 20:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:24.620 20:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.620 20:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.620 20:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.620 20:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:24.620 20:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:16:24.620 20:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:25.186 00:16:25.186 20:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.186 20:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.186 20:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.443 20:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.443 20:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.443 20:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.443 20:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.443 20:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.443 20:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.443 { 00:16:25.443 "cntlid": 135, 00:16:25.443 "qid": 0, 00:16:25.443 "state": "enabled", 00:16:25.443 "thread": "nvmf_tgt_poll_group_000", 00:16:25.443 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:25.443 "listen_address": { 00:16:25.443 "trtype": "TCP", 00:16:25.443 "adrfam": "IPv4", 00:16:25.443 "traddr": "10.0.0.2", 00:16:25.443 "trsvcid": "4420" 00:16:25.443 }, 00:16:25.443 "peer_address": { 00:16:25.443 "trtype": "TCP", 00:16:25.443 "adrfam": "IPv4", 00:16:25.443 "traddr": "10.0.0.1", 00:16:25.443 "trsvcid": "44922" 00:16:25.443 }, 00:16:25.443 "auth": { 00:16:25.443 "state": "completed", 00:16:25.443 "digest": "sha512", 00:16:25.443 "dhgroup": "ffdhe6144" 00:16:25.444 } 00:16:25.444 } 00:16:25.444 ]' 00:16:25.444 20:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.444 20:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:25.444 20:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.444 20:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:25.444 20:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.444 20:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.444 20:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.444 20:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.701 20:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:16:25.701 20:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:16:26.634 20:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.634 20:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:26.634 20:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.634 20:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.634 20:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.634 20:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:26.634 20:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.634 20:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:26.634 20:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:26.892 20:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:16:26.892 20:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.892 20:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:26.892 20:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:26.892 20:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:26.892 20:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.892 20:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.892 20:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.892 20:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.892 20:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.892 20:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.892 20:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.892 20:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.824 00:16:27.824 20:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.824 20:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.824 20:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.082 20:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.082 20:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.082 20:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.082 20:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.082 20:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.082 20:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.082 { 00:16:28.082 "cntlid": 137, 00:16:28.082 "qid": 0, 00:16:28.082 "state": "enabled", 00:16:28.082 "thread": "nvmf_tgt_poll_group_000", 00:16:28.082 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:28.082 "listen_address": { 00:16:28.082 "trtype": "TCP", 00:16:28.082 "adrfam": "IPv4", 00:16:28.082 "traddr": "10.0.0.2", 00:16:28.082 "trsvcid": "4420" 00:16:28.082 }, 00:16:28.082 "peer_address": { 00:16:28.082 "trtype": "TCP", 00:16:28.082 "adrfam": "IPv4", 00:16:28.082 "traddr": "10.0.0.1", 00:16:28.082 "trsvcid": "44954" 00:16:28.082 }, 00:16:28.082 "auth": { 00:16:28.082 "state": "completed", 00:16:28.082 "digest": "sha512", 00:16:28.082 "dhgroup": "ffdhe8192" 00:16:28.082 } 00:16:28.082 } 00:16:28.082 ]' 00:16:28.082 20:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.082 20:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:28.082 20:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.082 20:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:28.082 20:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.082 20:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.082 20:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.082 20:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.339 20:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:16:28.339 20:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:16:29.268 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.269 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:29.269 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.269 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.269 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.269 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.269 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:29.269 20:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:29.525 20:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:16:29.525 20:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.525 20:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:29.525 20:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:29.525 20:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:29.525 20:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.525 20:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.525 20:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.525 20:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.525 20:46:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.525 20:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.525 20:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.525 20:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.468 00:16:30.468 20:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.468 20:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.468 20:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.785 20:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.785 20:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.785 20:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.785 20:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.785 20:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.785 20:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.785 { 00:16:30.785 "cntlid": 139, 00:16:30.785 "qid": 0, 00:16:30.785 "state": "enabled", 00:16:30.785 "thread": "nvmf_tgt_poll_group_000", 00:16:30.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:30.785 "listen_address": { 00:16:30.785 "trtype": "TCP", 00:16:30.785 "adrfam": "IPv4", 00:16:30.785 "traddr": "10.0.0.2", 00:16:30.785 "trsvcid": "4420" 00:16:30.785 }, 00:16:30.785 "peer_address": { 00:16:30.785 "trtype": "TCP", 00:16:30.785 "adrfam": "IPv4", 00:16:30.785 "traddr": "10.0.0.1", 00:16:30.785 "trsvcid": "44982" 00:16:30.785 }, 00:16:30.785 "auth": { 00:16:30.785 "state": "completed", 00:16:30.785 "digest": "sha512", 00:16:30.785 "dhgroup": "ffdhe8192" 00:16:30.785 } 00:16:30.785 } 00:16:30.785 ]' 00:16:30.785 20:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.785 20:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:30.785 20:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.785 20:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:30.785 20:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.785 20:46:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.785 20:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.785 20:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.042 20:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: --dhchap-ctrl-secret DHHC-1:02:YTVkZDRjYmVjNmEyNjNiNjM2YmQ5YzdmMTE5MDNmMDEzOTRjNTZlZmE2Mjk0YTU0mew6Ig==: 00:16:31.042 20:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: --dhchap-ctrl-secret DHHC-1:02:YTVkZDRjYmVjNmEyNjNiNjM2YmQ5YzdmMTE5MDNmMDEzOTRjNTZlZmE2Mjk0YTU0mew6Ig==: 00:16:31.973 20:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.232 20:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:32.232 20:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.232 20:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.232 20:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.232 20:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.232 20:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:32.232 20:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:32.490 20:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:16:32.490 20:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.490 20:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:32.490 20:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:32.490 20:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:32.490 20:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.490 20:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.490 20:46:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.490 20:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.490 20:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.490 20:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.490 20:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.490 20:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.423 00:16:33.423 20:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.423 20:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.423 20:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.681 20:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.681 20:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.681 20:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.681 20:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.681 20:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.681 20:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.681 { 00:16:33.681 "cntlid": 141, 00:16:33.681 "qid": 0, 00:16:33.681 "state": "enabled", 00:16:33.681 "thread": "nvmf_tgt_poll_group_000", 00:16:33.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:33.681 "listen_address": { 00:16:33.681 "trtype": "TCP", 00:16:33.681 "adrfam": "IPv4", 00:16:33.681 "traddr": "10.0.0.2", 00:16:33.681 "trsvcid": "4420" 00:16:33.681 }, 00:16:33.681 "peer_address": { 00:16:33.681 "trtype": "TCP", 00:16:33.681 "adrfam": "IPv4", 00:16:33.681 "traddr": "10.0.0.1", 00:16:33.681 "trsvcid": "41626" 00:16:33.681 }, 00:16:33.681 "auth": { 00:16:33.681 "state": "completed", 00:16:33.681 "digest": "sha512", 00:16:33.681 "dhgroup": "ffdhe8192" 00:16:33.681 } 00:16:33.681 } 00:16:33.681 ]' 00:16:33.681 20:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.681 20:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:33.681 20:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.681 20:46:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:33.681 20:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.682 20:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.682 20:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.682 20:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.939 20:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret DHHC-1:01:NGIwZjg2Yjk0ZmRiYjVjZTY3YjhkMmUxNjkzYjMxODOiDw2k: 00:16:33.939 20:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret DHHC-1:01:NGIwZjg2Yjk0ZmRiYjVjZTY3YjhkMmUxNjkzYjMxODOiDw2k: 00:16:34.872 20:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.872 20:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:34.872 20:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.872 20:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.872 20:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.872 20:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.872 20:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:34.872 20:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:35.438 20:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:16:35.438 20:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.438 20:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:35.438 20:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:35.438 20:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:35.438 20:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.438 20:46:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:35.438 20:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.438 20:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.438 20:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.438 20:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:35.438 20:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:35.438 20:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:36.371 00:16:36.371 20:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.371 20:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.371 20:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.371 20:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.371 20:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.371 20:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.371 20:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.371 20:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.372 20:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.372 { 00:16:36.372 "cntlid": 143, 00:16:36.372 "qid": 0, 00:16:36.372 "state": "enabled", 00:16:36.372 "thread": "nvmf_tgt_poll_group_000", 00:16:36.372 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:36.372 "listen_address": { 00:16:36.372 "trtype": "TCP", 00:16:36.372 "adrfam": "IPv4", 00:16:36.372 "traddr": "10.0.0.2", 00:16:36.372 "trsvcid": "4420" 00:16:36.372 }, 00:16:36.372 "peer_address": { 00:16:36.372 "trtype": "TCP", 00:16:36.372 "adrfam": "IPv4", 00:16:36.372 "traddr": "10.0.0.1", 00:16:36.372 "trsvcid": "41648" 00:16:36.372 }, 00:16:36.372 "auth": { 00:16:36.372 "state": "completed", 00:16:36.372 "digest": "sha512", 00:16:36.372 "dhgroup": "ffdhe8192" 00:16:36.372 } 00:16:36.372 } 00:16:36.372 ]' 00:16:36.372 20:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.372 20:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.372 
20:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.629 20:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:36.629 20:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.629 20:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.629 20:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.629 20:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.886 20:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:16:36.886 20:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:16:37.820 20:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.820 20:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:37.820 20:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.820 20:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.820 20:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.820 20:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:37.820 20:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:16:37.820 20:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:37.820 20:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:37.820 20:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:37.820 20:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:38.078 20:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:16:38.078 20:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.078 20:46:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:38.078 20:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:38.078 20:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:38.078 20:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.078 20:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.078 20:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.078 20:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.078 20:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.078 20:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.078 20:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.078 20:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.011 00:16:39.011 20:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.011 20:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.011 20:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.268 20:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.268 20:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.268 20:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.268 20:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.268 20:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.268 20:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.268 { 00:16:39.268 "cntlid": 145, 00:16:39.268 "qid": 0, 00:16:39.268 "state": "enabled", 00:16:39.268 "thread": "nvmf_tgt_poll_group_000", 00:16:39.268 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:39.268 "listen_address": { 00:16:39.268 "trtype": "TCP", 00:16:39.268 "adrfam": "IPv4", 00:16:39.268 "traddr": "10.0.0.2", 00:16:39.268 "trsvcid": "4420" 00:16:39.268 }, 00:16:39.268 "peer_address": { 00:16:39.268 
"trtype": "TCP", 00:16:39.268 "adrfam": "IPv4", 00:16:39.268 "traddr": "10.0.0.1", 00:16:39.268 "trsvcid": "41678" 00:16:39.268 }, 00:16:39.268 "auth": { 00:16:39.268 "state": "completed", 00:16:39.268 "digest": "sha512", 00:16:39.268 "dhgroup": "ffdhe8192" 00:16:39.268 } 00:16:39.268 } 00:16:39.268 ]' 00:16:39.268 20:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.268 20:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:39.268 20:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.268 20:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:39.268 20:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.526 20:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.526 20:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.526 20:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.783 20:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:16:39.783 20:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZDJiM2Q3OTg0ZWE2YjI5NzMzNzM3NmY1NjNjNjVkZjdiYjEwMWI5ODg1NmQzNWYyQcmoCQ==: --dhchap-ctrl-secret DHHC-1:03:Yzc5MmZlYzBiN2U1NWFkYzQ4NjU2MTIwZDczZGU1ODk2MjgzYTBjMWQ2NzVmMTgyYWZiODUyMjlkZTEzNGMwZm/Jc0E=: 00:16:40.716 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.716 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:40.716 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.716 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.716 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.716 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:16:40.716 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.716 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.716 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.716 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:16:40.716 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:40.716 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:16:40.716 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:40.716 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:40.716 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:40.716 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:40.716 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:16:40.716 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:40.716 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:41.281 request: 00:16:41.281 { 00:16:41.281 "name": "nvme0", 00:16:41.281 "trtype": "tcp", 00:16:41.281 "traddr": "10.0.0.2", 00:16:41.281 "adrfam": "ipv4", 00:16:41.281 "trsvcid": "4420", 00:16:41.281 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:41.281 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:41.281 "prchk_reftag": false, 00:16:41.281 "prchk_guard": false, 00:16:41.281 "hdgst": false, 00:16:41.281 "ddgst": false, 00:16:41.281 "dhchap_key": "key2", 00:16:41.281 "allow_unrecognized_csi": false, 00:16:41.281 "method": "bdev_nvme_attach_controller", 00:16:41.281 "req_id": 1 00:16:41.281 } 00:16:41.281 Got JSON-RPC error response 00:16:41.281 response: 00:16:41.281 { 00:16:41.281 "code": -5, 00:16:41.281 "message": "Input/output error" 00:16:41.281 } 00:16:41.281 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:41.281 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:41.281 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:41.281 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:41.281 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:41.281 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.281 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.281 20:46:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.281 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.281 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.281 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.281 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.281 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:41.281 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:41.281 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:41.281 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:41.281 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:41.281 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:41.281 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:41.281 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:41.281 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:41.281 20:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:42.212 request: 00:16:42.213 { 00:16:42.213 "name": "nvme0", 00:16:42.213 "trtype": "tcp", 00:16:42.213 "traddr": "10.0.0.2", 00:16:42.213 "adrfam": "ipv4", 00:16:42.213 "trsvcid": "4420", 00:16:42.213 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:42.213 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:42.213 "prchk_reftag": false, 00:16:42.213 "prchk_guard": false, 00:16:42.213 "hdgst": false, 00:16:42.213 "ddgst": false, 00:16:42.213 "dhchap_key": "key1", 00:16:42.213 "dhchap_ctrlr_key": "ckey2", 00:16:42.213 "allow_unrecognized_csi": false, 00:16:42.213 "method": "bdev_nvme_attach_controller", 00:16:42.213 "req_id": 1 00:16:42.213 } 00:16:42.213 Got JSON-RPC error response 00:16:42.213 response: 00:16:42.213 { 00:16:42.213 "code": -5, 00:16:42.213 "message": "Input/output error" 00:16:42.213 } 00:16:42.213 20:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:42.213 20:46:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:42.213 20:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:42.213 20:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:42.213 20:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:42.213 20:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.213 20:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.213 20:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.213 20:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:16:42.213 20:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.213 20:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.213 20:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.213 20:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.213 20:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:42.213 20:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.213 20:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:42.213 20:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:42.213 20:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:42.213 20:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:42.213 20:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.213 20:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:42.213 20:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.146 request: 00:16:43.146 { 00:16:43.146 "name": "nvme0", 00:16:43.146 "trtype": "tcp", 00:16:43.146 "traddr": "10.0.0.2", 00:16:43.146 "adrfam": "ipv4", 00:16:43.146 "trsvcid": "4420", 00:16:43.146 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:43.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:43.146 "prchk_reftag": false, 00:16:43.146 "prchk_guard": false, 00:16:43.146 "hdgst": false, 00:16:43.146 "ddgst": false, 00:16:43.146 "dhchap_key": "key1", 00:16:43.146 "dhchap_ctrlr_key": "ckey1", 00:16:43.146 "allow_unrecognized_csi": false, 00:16:43.146 "method": "bdev_nvme_attach_controller", 00:16:43.146 "req_id": 1 00:16:43.146 } 00:16:43.146 Got JSON-RPC error response 00:16:43.146 response: 00:16:43.146 { 00:16:43.146 "code": -5, 00:16:43.146 "message": "Input/output error" 00:16:43.146 } 00:16:43.146 20:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:43.146 20:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:43.146 20:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:43.146 20:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:43.146 20:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:43.146 20:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.146 20:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.146 20:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.146 20:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1645477 00:16:43.146 20:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1645477 ']' 00:16:43.146 20:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1645477 00:16:43.146 20:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:43.146 20:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:43.146 20:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1645477 00:16:43.146 20:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:43.146 20:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:43.146 20:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1645477' 00:16:43.146 killing process with pid 1645477 00:16:43.146 20:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1645477 00:16:43.146 20:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1645477 00:16:43.404 20:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:43.405 20:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:43.405 20:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:43.405 20:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:43.405 20:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1668243 00:16:43.405 20:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:43.405 20:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1668243 00:16:43.405 20:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1668243 ']' 00:16:43.405 20:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.405 20:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:43.405 20:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.405 20:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:43.405 20:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.662 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:43.662 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:43.662 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:43.662 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:43.662 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.662 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:43.662 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:43.662 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1668243 00:16:43.662 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1668243 ']' 00:16:43.662 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.662 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:43.662 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:43.662 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:43.662 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.919 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:43.919 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:43.919 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:16:43.919 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.919 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.919 null0 00:16:43.919 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.919 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:43.919 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.okc 00:16:43.919 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.919 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.919 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.919 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.AvC ]] 00:16:43.919 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AvC 00:16:43.919 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.919 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.919 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.919 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:43.919 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.2KZ 00:16:43.919 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.919 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.177 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.177 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.btC ]] 00:16:44.177 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.btC 00:16:44.177 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.177 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.177 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.177 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:44.177 20:46:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.xlU 00:16:44.177 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.177 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.177 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.177 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.6wk ]] 00:16:44.177 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6wk 00:16:44.177 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.177 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.177 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.177 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:44.177 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.SXr 00:16:44.177 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.177 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.177 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.177 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:16:44.177 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:16:44.177 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.177 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:44.177 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:44.177 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:44.177 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.177 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:44.177 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.177 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.177 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.177 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:44.177 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:16:44.177 20:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.549 nvme0n1 00:16:45.549 20:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.549 20:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.549 20:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.807 20:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.807 20:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.807 20:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.807 20:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.807 20:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.807 20:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.807 { 00:16:45.807 "cntlid": 1, 00:16:45.807 "qid": 0, 00:16:45.807 "state": "enabled", 00:16:45.807 "thread": "nvmf_tgt_poll_group_000", 00:16:45.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:45.807 "listen_address": { 00:16:45.807 "trtype": "TCP", 00:16:45.807 "adrfam": "IPv4", 00:16:45.807 "traddr": "10.0.0.2", 00:16:45.807 "trsvcid": "4420" 00:16:45.807 }, 00:16:45.807 "peer_address": { 00:16:45.807 "trtype": "TCP", 00:16:45.807 "adrfam": "IPv4", 00:16:45.807 "traddr": "10.0.0.1", 00:16:45.807 "trsvcid": "44518" 00:16:45.807 }, 00:16:45.807 "auth": { 00:16:45.807 "state": "completed", 00:16:45.807 "digest": "sha512", 00:16:45.807 "dhgroup": "ffdhe8192" 00:16:45.807 } 00:16:45.807 } 00:16:45.807 ]' 00:16:45.808 20:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.808 20:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.808 20:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.808 20:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:45.808 20:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.808 20:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.808 20:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.808 20:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.066 20:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:16:46.066 20:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:16:47.000 20:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.000 20:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:47.000 20:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.000 20:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.000 20:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.000 20:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:16:47.000 20:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.000 20:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.000 20:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.000 20:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:47.000 20:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:47.257 20:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:47.257 20:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:47.257 20:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:47.257 20:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:47.257 20:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:47.257 20:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:47.257 20:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:47.257 20:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:47.258 20:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:47.258 20:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:47.515 request: 00:16:47.515 { 00:16:47.515 "name": "nvme0", 00:16:47.515 "trtype": "tcp", 00:16:47.515 "traddr": "10.0.0.2", 00:16:47.515 "adrfam": "ipv4", 00:16:47.515 "trsvcid": "4420", 00:16:47.515 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:47.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:47.515 "prchk_reftag": false, 00:16:47.515 "prchk_guard": false, 00:16:47.515 "hdgst": false, 00:16:47.515 "ddgst": false, 00:16:47.515 "dhchap_key": "key3", 00:16:47.515 "allow_unrecognized_csi": false, 00:16:47.515 "method": "bdev_nvme_attach_controller", 00:16:47.515 "req_id": 1 00:16:47.515 } 00:16:47.515 Got JSON-RPC error response 00:16:47.515 response: 00:16:47.515 { 00:16:47.515 "code": -5, 00:16:47.515 "message": "Input/output error" 00:16:47.515 } 00:16:47.515 20:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:47.515 20:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:47.515 20:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:47.515 20:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:47.515 20:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:16:47.515 20:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:16:47.515 20:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:47.515 20:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:48.080 20:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:48.080 20:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:48.080 20:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:48.080 20:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:48.081 20:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:48.081 20:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:48.081 20:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:48.081 20:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:48.081 20:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:48.081 20:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:48.081 request: 00:16:48.081 { 00:16:48.081 "name": "nvme0", 00:16:48.081 "trtype": "tcp", 00:16:48.081 "traddr": "10.0.0.2", 00:16:48.081 "adrfam": "ipv4", 00:16:48.081 "trsvcid": "4420", 00:16:48.081 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:48.081 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:48.081 "prchk_reftag": false, 00:16:48.081 "prchk_guard": false, 00:16:48.081 "hdgst": false, 00:16:48.081 "ddgst": false, 00:16:48.081 "dhchap_key": "key3", 00:16:48.081 "allow_unrecognized_csi": false, 00:16:48.081 "method": "bdev_nvme_attach_controller", 00:16:48.081 "req_id": 1 00:16:48.081 } 00:16:48.081 Got JSON-RPC error response 00:16:48.081 response: 00:16:48.081 { 00:16:48.081 "code": -5, 00:16:48.081 "message": "Input/output error" 00:16:48.081 } 00:16:48.081 20:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:48.081 20:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:48.081 20:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:48.081 20:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:48.081 20:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:48.081 20:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:16:48.081 20:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:48.081 20:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:48.081 20:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:48.081 20:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:48.339 20:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:48.339 20:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.339 20:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.598 20:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.598 20:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:48.598 20:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.598 20:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.598 20:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.598 20:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:48.598 20:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:48.598 20:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:48.598 20:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:48.598 20:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:48.598 20:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:48.598 20:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:48.598 20:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:48.598 20:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:48.598 20:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:49.164 request: 00:16:49.164 { 00:16:49.164 "name": "nvme0", 00:16:49.164 "trtype": "tcp", 00:16:49.164 "traddr": "10.0.0.2", 00:16:49.164 "adrfam": "ipv4", 00:16:49.164 "trsvcid": "4420", 00:16:49.164 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:49.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:49.164 "prchk_reftag": false, 00:16:49.164 "prchk_guard": false, 00:16:49.164 "hdgst": false, 00:16:49.164 "ddgst": false, 00:16:49.164 "dhchap_key": "key0", 00:16:49.164 "dhchap_ctrlr_key": "key1", 00:16:49.164 "allow_unrecognized_csi": false, 00:16:49.164 "method": "bdev_nvme_attach_controller", 00:16:49.164 "req_id": 1 00:16:49.164 } 00:16:49.164 Got JSON-RPC error response 00:16:49.164 response: 00:16:49.164 { 00:16:49.164 "code": -5, 00:16:49.164 "message": "Input/output error" 00:16:49.164 } 00:16:49.164 20:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:49.164 20:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:49.164 20:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:49.164 20:46:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:49.164 20:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:16:49.164 20:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:49.164 20:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:49.422 nvme0n1 00:16:49.422 20:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:16:49.422 20:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:16:49.422 20:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.680 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.680 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.680 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.938 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:16:49.938 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.938 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.938 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.938 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:49.938 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:49.938 20:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:51.311 nvme0n1 00:16:51.311 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:16:51.311 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:16:51.311 20:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.569 20:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.569 20:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:51.569 20:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.570 20:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.570 20:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.570 20:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:16:51.570 20:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:16:51.570 20:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.828 20:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.828 20:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:16:51.828 20:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: --dhchap-ctrl-secret DHHC-1:03:MTkwNWI0NmIzZTU3ZjkyZmQwMTg4ZDQ2OTExNjI1MGI5ZGQ0ZDllMDIyNTBiMzYwNTAyYzc1MTNkNzJhNzdkZuyf8Tc=: 00:16:52.761 20:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:16:52.761 20:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:16:52.761 20:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:16:52.761 20:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:16:52.761 20:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:16:52.761 20:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:16:52.761 20:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:16:52.761 20:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.761 20:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.019 20:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:16:53.019 20:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:53.019 20:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:16:53.019 20:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:53.019 20:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:53.019 20:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:53.019 20:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:53.019 20:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:53.019 20:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:53.019 20:46:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:53.951 request: 00:16:53.951 { 00:16:53.951 "name": "nvme0", 00:16:53.951 "trtype": "tcp", 00:16:53.951 "traddr": "10.0.0.2", 00:16:53.951 "adrfam": "ipv4", 00:16:53.951 "trsvcid": "4420", 00:16:53.951 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:53.951 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:16:53.951 "prchk_reftag": false, 00:16:53.951 "prchk_guard": false, 00:16:53.951 "hdgst": false, 00:16:53.951 "ddgst": false, 00:16:53.951 "dhchap_key": "key1", 00:16:53.951 "allow_unrecognized_csi": false, 00:16:53.951 "method": "bdev_nvme_attach_controller", 00:16:53.951 "req_id": 1 00:16:53.951 } 00:16:53.951 Got JSON-RPC error response 00:16:53.951 response: 00:16:53.951 { 00:16:53.951 "code": -5, 00:16:53.951 "message": "Input/output error" 00:16:53.951 } 00:16:53.951 20:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:53.951 20:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:53.951 20:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:53.951 20:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:53.951 20:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:53.951 20:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:53.951 20:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:55.324 nvme0n1 00:16:55.324 20:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:16:55.324 20:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:16:55.324 20:46:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.324 20:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.324 20:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.324 20:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.889 20:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:55.889 20:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.889 20:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.889 20:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.889 20:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:16:55.889 20:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:16:55.889 20:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:16:56.146 nvme0n1 00:16:56.146 20:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:16:56.146 20:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:16:56.146 20:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.418 20:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.418 20:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.418 20:46:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.736 20:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:56.736 20:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.736 20:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.736 20:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.736 20:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: '' 2s 00:16:56.736 20:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:16:56.736 20:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:16:56.736 20:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: 00:16:56.736 20:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:16:56.736 20:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:16:56.736 20:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:16:56.736 20:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: ]] 00:16:56.736 20:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NmIyNWU5ZjM0M2Y1ZWMyOTU0NzI0NzA4NDQyMDYyZGS+yavJ: 00:16:56.736 20:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:16:56.736 20:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:16:56.736 20:47:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:16:58.638 20:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:16:58.638 20:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:16:58.638 20:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:58.638 20:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:58.638 20:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:58.638 20:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:16:58.638 20:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:16:58.638 20:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key key2 00:16:58.638 20:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.638 20:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.638 20:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.638 20:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: 2s 00:16:58.638 20:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:16:58.638 20:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:16:58.638 20:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:16:58.638 20:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: 00:16:58.638 20:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:16:58.638 20:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:16:58.638 20:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:16:58.638 20:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: ]] 00:16:58.638 20:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NTA1MTgzZTVkZDUzMGYyNDVhOTllN2NhYzE2MjJiZTAwYTU4YTAwMTUwOWRlODVkrrWU6Q==: 00:16:58.638 20:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:16:58.638 20:47:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:01.165 20:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:01.165 20:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:01.165 20:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:01.165 20:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:01.165 20:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:01.165 20:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:01.165 20:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:01.165 20:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.165 20:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:01.165 20:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.165 20:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.165 20:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.165 20:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:01.165 20:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:01.165 20:47:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:02.099 nvme0n1 00:17:02.099 20:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:02.099 20:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.099 20:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.099 20:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.099 20:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:02.099 20:47:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:03.032 20:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:03.032 20:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:03.032 20:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.032 20:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.032 20:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:03.032 20:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.032 20:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.290 20:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.290 20:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:03.290 20:47:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:03.547 20:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:03.547 20:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:03.547 20:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.805 20:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.805 20:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:03.805 20:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.805 20:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.805 20:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.805 20:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:03.805 20:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:03.805 20:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:03.805 20:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:03.805 20:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:03.805 20:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:03.805 20:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:03.805 20:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:03.805 20:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:04.370 request: 00:17:04.370 { 00:17:04.370 "name": "nvme0", 00:17:04.370 "dhchap_key": "key1", 00:17:04.370 "dhchap_ctrlr_key": "key3", 00:17:04.370 "method": "bdev_nvme_set_keys", 00:17:04.370 "req_id": 1 00:17:04.370 } 00:17:04.370 Got JSON-RPC error response 00:17:04.370 response: 00:17:04.370 { 00:17:04.370 "code": -13, 00:17:04.370 "message": "Permission denied" 00:17:04.370 } 00:17:04.627 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:04.627 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:04.627 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:04.627 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:04.627 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:04.627 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:04.627 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.885 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:17:04.885 20:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:05.818 20:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:05.818 20:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:05.818 20:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.075 20:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:06.075 20:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:06.075 20:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.075 20:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.075 20:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.075 20:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:06.075 20:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:06.075 20:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:07.443 nvme0n1 00:17:07.443 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:07.443 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.443 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.443 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.443 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:07.443 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:07.443 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:07.443 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
00:17:07.443 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:07.443 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:07.443 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:07.443 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:07.443 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:08.376 request: 00:17:08.376 { 00:17:08.376 "name": "nvme0", 00:17:08.376 "dhchap_key": "key2", 00:17:08.376 "dhchap_ctrlr_key": "key0", 00:17:08.376 "method": "bdev_nvme_set_keys", 00:17:08.376 "req_id": 1 00:17:08.376 } 00:17:08.376 Got JSON-RPC error response 00:17:08.376 response: 00:17:08.376 { 00:17:08.376 "code": -13, 00:17:08.376 "message": "Permission denied" 00:17:08.376 } 00:17:08.376 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:08.376 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:08.376 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:08.376 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:08.376 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:08.376 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.376 20:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:08.633 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:08.633 20:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:09.564 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:09.564 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:09.564 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.821 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:09.821 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:09.821 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:09.821 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1645508 00:17:09.821 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1645508 ']' 00:17:09.821 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1645508 00:17:09.821 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:09.821 
20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:09.821 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1645508 00:17:09.821 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:09.821 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:09.821 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1645508' 00:17:09.821 killing process with pid 1645508 00:17:09.821 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1645508 00:17:09.821 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1645508 00:17:10.384 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:10.384 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:10.384 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:10.384 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:10.384 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:10.384 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:10.384 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:10.384 rmmod nvme_tcp 00:17:10.384 rmmod nvme_fabrics 00:17:10.384 rmmod nvme_keyring 00:17:10.384 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:10.384 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:10.384 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:10.384 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1668243 ']' 00:17:10.384 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1668243 00:17:10.384 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1668243 ']' 00:17:10.384 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1668243 00:17:10.384 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:10.384 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:10.384 20:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1668243 00:17:10.384 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:10.384 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:10.384 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1668243' 00:17:10.384 killing process with pid 1668243 00:17:10.384 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1668243 00:17:10.384 20:47:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1668243 00:17:10.644 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:10.644 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:10.644 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:10.644 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:10.644 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:10.644 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:10.644 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:10.644 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:10.644 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:10.644 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.644 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:10.644 20:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.okc /tmp/spdk.key-sha256.2KZ /tmp/spdk.key-sha384.xlU /tmp/spdk.key-sha512.SXr /tmp/spdk.key-sha512.AvC /tmp/spdk.key-sha384.btC /tmp/spdk.key-sha256.6wk '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:13.181 00:17:13.181 real 3m31.452s 00:17:13.181 user 8m16.451s 00:17:13.181 sys 0m27.959s 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.181 ************************************ 00:17:13.181 END TEST nvmf_auth_target 00:17:13.181 ************************************ 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:13.181 ************************************ 00:17:13.181 START TEST nvmf_bdevio_no_huge 00:17:13.181 ************************************ 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:13.181 * Looking for test storage... 
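(Note: the section that starts here exercises the bdevio target test with hugepages disabled. The sketch below is a condensed summary of the invocation recorded in this log, with the long CI workspace paths shortened; the wrapper internals live in autotest_common.sh and test/nvmf/common.sh, so treat it as a reading aid rather than the exact implementation.)

    # run the bdevio target test over TCP with hugepages disabled, as the driver does above:
    run_test nvmf_bdevio_no_huge \
        ./test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
    # with --no-hugepages the script starts both the nvmf target and the bdevio app with
    # "--no-huge -s 1024", i.e. SPDK allocates its 1024 MB pool from normal pages instead
    # of hugetlbfs (see the nvmf_tgt and bdevio command lines later in this log).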
00:17:13.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:13.181 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:13.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.182 --rc genhtml_branch_coverage=1 00:17:13.182 --rc genhtml_function_coverage=1 00:17:13.182 --rc genhtml_legend=1 00:17:13.182 --rc geninfo_all_blocks=1 00:17:13.182 --rc geninfo_unexecuted_blocks=1 00:17:13.182 00:17:13.182 ' 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:13.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.182 --rc genhtml_branch_coverage=1 00:17:13.182 --rc genhtml_function_coverage=1 00:17:13.182 --rc genhtml_legend=1 00:17:13.182 --rc geninfo_all_blocks=1 00:17:13.182 --rc geninfo_unexecuted_blocks=1 00:17:13.182 00:17:13.182 ' 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:13.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.182 --rc genhtml_branch_coverage=1 00:17:13.182 --rc genhtml_function_coverage=1 00:17:13.182 --rc genhtml_legend=1 00:17:13.182 --rc geninfo_all_blocks=1 00:17:13.182 --rc geninfo_unexecuted_blocks=1 00:17:13.182 00:17:13.182 ' 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:13.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.182 --rc genhtml_branch_coverage=1 00:17:13.182 --rc genhtml_function_coverage=1 00:17:13.182 --rc genhtml_legend=1 00:17:13.182 --rc geninfo_all_blocks=1 00:17:13.182 --rc geninfo_unexecuted_blocks=1 00:17:13.182 00:17:13.182 ' 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:13.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:17:13.182 20:47:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:17:15.087 
20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:15.087 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:15.087 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:15.087 Found net devices under 0000:09:00.0: cvl_0_0 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:15.087 Found net devices under 0000:09:00.1: cvl_0_1 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:15.087 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:15.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:15.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:17:15.088 00:17:15.088 --- 10.0.0.2 ping statistics --- 00:17:15.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.088 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:15.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:15.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:17:15.088 00:17:15.088 --- 10.0.0.1 ping statistics --- 00:17:15.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.088 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1673501 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1673501 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1673501 ']' 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:15.088 20:47:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:15.347 [2024-11-26 20:47:18.790890] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:17:15.347 [2024-11-26 20:47:18.790988] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:15.347 [2024-11-26 20:47:18.868892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:15.347 [2024-11-26 20:47:18.924951] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:15.347 [2024-11-26 20:47:18.925005] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:15.347 [2024-11-26 20:47:18.925037] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:15.347 [2024-11-26 20:47:18.925049] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:15.347 [2024-11-26 20:47:18.925058] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
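(Note: with nvmf_tgt started inside the cvl_0_0_ns_spdk namespace, the test next configures it over the RPC socket. The sketch below condenses the rpc_cmd calls that follow in this log; rpc_cmd is the suite's wrapper around scripts/rpc.py, presumably against /var/tmp/spdk.sock, and paths are shortened for readability.)

    # start the target without hugepages inside the test namespace (as above):
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    # then wire up a TCP subsystem backed by a 64 MB malloc bdev with 512-byte blocks:
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # the bdevio app then connects as a host to 10.0.0.2:4420 and runs its read/write/compare suite.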
00:17:15.347 [2024-11-26 20:47:18.926143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:15.347 [2024-11-26 20:47:18.926251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:15.347 [2024-11-26 20:47:18.926346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:15.347 [2024-11-26 20:47:18.926341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:15.605 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:15.605 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:17:15.605 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:15.605 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:15.605 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:15.605 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.605 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:15.605 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.605 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:15.605 [2024-11-26 20:47:19.087874] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:15.605 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.605 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:15.605 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.605 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:15.605 Malloc0 00:17:15.605 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.605 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:15.605 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.605 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:15.605 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.605 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:15.605 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.605 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:15.605 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.605 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:17:15.605 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.605 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:15.605 [2024-11-26 20:47:19.126401] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:15.605 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.605 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:15.605 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:15.605 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:15.605 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:15.605 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:15.605 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:15.605 { 00:17:15.605 "params": { 00:17:15.606 "name": "Nvme$subsystem", 00:17:15.606 "trtype": "$TEST_TRANSPORT", 00:17:15.606 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:15.606 "adrfam": "ipv4", 00:17:15.606 "trsvcid": "$NVMF_PORT", 00:17:15.606 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:15.606 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:15.606 "hdgst": ${hdgst:-false}, 00:17:15.606 "ddgst": ${ddgst:-false} 00:17:15.606 }, 00:17:15.606 "method": "bdev_nvme_attach_controller" 00:17:15.606 } 00:17:15.606 EOF 00:17:15.606 )") 00:17:15.606 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:15.606 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:17:15.606 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:15.606 20:47:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:15.606 "params": { 00:17:15.606 "name": "Nvme1", 00:17:15.606 "trtype": "tcp", 00:17:15.606 "traddr": "10.0.0.2", 00:17:15.606 "adrfam": "ipv4", 00:17:15.606 "trsvcid": "4420", 00:17:15.606 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:15.606 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:15.606 "hdgst": false, 00:17:15.606 "ddgst": false 00:17:15.606 }, 00:17:15.606 "method": "bdev_nvme_attach_controller" 00:17:15.606 }' 00:17:15.606 [2024-11-26 20:47:19.178637] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:17:15.606 [2024-11-26 20:47:19.178714] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1673525 ] 00:17:15.606 [2024-11-26 20:47:19.253051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:15.864 [2024-11-26 20:47:19.319066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.864 [2024-11-26 20:47:19.319116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:15.864 [2024-11-26 20:47:19.319119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.864 I/O targets: 00:17:15.864 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:15.864 00:17:15.864 00:17:15.864 CUnit - A unit testing framework for C - Version 2.1-3 00:17:15.864 http://cunit.sourceforge.net/ 00:17:15.864 00:17:15.864 00:17:15.864 Suite: bdevio tests on: Nvme1n1 00:17:16.122 Test: blockdev write read block ...passed 00:17:16.122 Test: blockdev write zeroes read block ...passed 00:17:16.122 Test: blockdev write zeroes read no split ...passed 00:17:16.122 Test: blockdev write zeroes read split ...passed 00:17:16.122 Test: blockdev write zeroes read split partial ...passed 00:17:16.122 Test: blockdev reset ...[2024-11-26 20:47:19.712162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:16.122 [2024-11-26 20:47:19.712274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd5b6a0 (9): Bad file descriptor 00:17:16.122 [2024-11-26 20:47:19.770447] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:17:16.122 passed 00:17:16.122 Test: blockdev write read 8 blocks ...passed 00:17:16.122 Test: blockdev write read size > 128k ...passed 00:17:16.122 Test: blockdev write read invalid size ...passed 00:17:16.122 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:16.122 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:16.122 Test: blockdev write read max offset ...passed 00:17:16.380 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:16.380 Test: blockdev writev readv 8 blocks ...passed 00:17:16.380 Test: blockdev writev readv 30 x 1block ...passed 00:17:16.380 Test: blockdev writev readv block ...passed 00:17:16.380 Test: blockdev writev readv size > 128k ...passed 00:17:16.380 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:16.380 Test: blockdev comparev and writev ...[2024-11-26 20:47:20.023962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:16.380 [2024-11-26 20:47:20.024013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.380 [2024-11-26 20:47:20.024039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:16.380 [2024-11-26 20:47:20.024057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:16.380 [2024-11-26 20:47:20.024389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:16.380 [2024-11-26 20:47:20.024414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:16.380 [2024-11-26 20:47:20.024437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:16.380 [2024-11-26 20:47:20.024453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:16.380 [2024-11-26 20:47:20.024765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:16.380 [2024-11-26 20:47:20.024789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:16.380 [2024-11-26 20:47:20.024811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:16.380 [2024-11-26 20:47:20.024827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:16.380 [2024-11-26 20:47:20.025127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:16.380 [2024-11-26 20:47:20.025151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:16.380 [2024-11-26 20:47:20.025174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:16.380 [2024-11-26 20:47:20.025190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:16.380 passed 00:17:16.638 Test: blockdev nvme passthru rw ...passed 00:17:16.638 Test: blockdev nvme passthru vendor specific ...[2024-11-26 20:47:20.107553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:16.638 [2024-11-26 20:47:20.107583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:16.638 [2024-11-26 20:47:20.107727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:16.638 [2024-11-26 20:47:20.107752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:16.638 [2024-11-26 20:47:20.107889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:16.638 [2024-11-26 20:47:20.107913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:16.638 [2024-11-26 20:47:20.108063] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:16.638 [2024-11-26 20:47:20.108088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:16.638 passed 00:17:16.638 Test: blockdev nvme admin passthru ...passed 00:17:16.638 Test: blockdev copy ...passed 00:17:16.638 00:17:16.638 Run Summary: Type Total Ran Passed Failed Inactive 00:17:16.638 suites 1 1 n/a 0 0 00:17:16.638 tests 23 23 23 0 0 00:17:16.638 asserts 152 152 152 0 n/a 00:17:16.638 00:17:16.638 Elapsed time = 1.222 seconds 00:17:16.896 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:16.896 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.896 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:16.896 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.896 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:16.896 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:16.896 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:16.896 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:16.896 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:16.896 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:16.896 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:16.896 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:16.896 rmmod nvme_tcp 00:17:16.896 rmmod nvme_fabrics 00:17:16.896 rmmod nvme_keyring 00:17:16.896 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:16.896 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:17:16.896 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:16.896 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1673501 ']' 00:17:16.896 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1673501 00:17:16.896 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1673501 ']' 00:17:16.896 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1673501 00:17:16.896 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:17:16.896 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:16.896 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1673501 00:17:17.154 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:17:17.154 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:17:17.154 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1673501' 00:17:17.154 killing process with pid 1673501 00:17:17.154 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1673501 00:17:17.154 20:47:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1673501 00:17:17.414 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:17.414 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:17.414 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:17.414 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:17.414 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:17.414 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:17.414 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:17.414 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:17.414 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:17.414 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.414 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:17.414 20:47:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:19.948 00:17:19.948 real 0m6.702s 00:17:19.948 user 0m11.095s 00:17:19.948 sys 0m2.654s 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:17:19.948 ************************************ 00:17:19.948 END TEST nvmf_bdevio_no_huge 00:17:19.948 ************************************ 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:19.948 ************************************ 00:17:19.948 START TEST nvmf_tls 00:17:19.948 ************************************ 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:19.948 * Looking for test storage... 00:17:19.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:19.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.948 --rc genhtml_branch_coverage=1 00:17:19.948 --rc genhtml_function_coverage=1 00:17:19.948 --rc genhtml_legend=1 00:17:19.948 --rc geninfo_all_blocks=1 00:17:19.948 --rc geninfo_unexecuted_blocks=1 00:17:19.948 00:17:19.948 ' 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:19.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.948 --rc genhtml_branch_coverage=1 00:17:19.948 --rc genhtml_function_coverage=1 00:17:19.948 --rc genhtml_legend=1 00:17:19.948 --rc geninfo_all_blocks=1 00:17:19.948 --rc geninfo_unexecuted_blocks=1 00:17:19.948 00:17:19.948 ' 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:19.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.948 --rc genhtml_branch_coverage=1 00:17:19.948 --rc genhtml_function_coverage=1 00:17:19.948 --rc genhtml_legend=1 00:17:19.948 --rc geninfo_all_blocks=1 00:17:19.948 --rc geninfo_unexecuted_blocks=1 00:17:19.948 00:17:19.948 ' 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:19.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.948 --rc genhtml_branch_coverage=1 00:17:19.948 --rc genhtml_function_coverage=1 00:17:19.948 --rc genhtml_legend=1 00:17:19.948 --rc geninfo_all_blocks=1 00:17:19.948 --rc geninfo_unexecuted_blocks=1 00:17:19.948 00:17:19.948 ' 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:19.948 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:19.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:17:19.949 20:47:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:21.853 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:21.853 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:21.853 Found net devices under 0000:09:00.0: cvl_0_0 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:21.853 Found net devices under 0000:09:00.1: cvl_0_1 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:21.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:21.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:17:21.853 00:17:21.853 --- 10.0.0.2 ping statistics --- 00:17:21.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.853 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:21.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:21.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:17:21.853 00:17:21.853 --- 10.0.0.1 ping statistics --- 00:17:21.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.853 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:21.853 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:21.854 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:21.854 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:21.854 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:21.854 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:21.854 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:21.854 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:21.854 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:21.854 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:21.854 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:21.854 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1675731 00:17:21.854 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:21.854 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1675731 00:17:21.854 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1675731 ']' 00:17:21.854 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.854 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:21.854 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.854 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:21.854 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:21.854 [2024-11-26 20:47:25.518848] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:17:21.854 [2024-11-26 20:47:25.518942] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.112 [2024-11-26 20:47:25.591378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.112 [2024-11-26 20:47:25.644896] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.112 [2024-11-26 20:47:25.644953] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.112 [2024-11-26 20:47:25.644981] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:22.112 [2024-11-26 20:47:25.644992] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:22.112 [2024-11-26 20:47:25.645002] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:22.112 [2024-11-26 20:47:25.645589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.112 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:22.112 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:22.112 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:22.112 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:22.112 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:22.112 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:22.112 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:17:22.112 20:47:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:22.370 true 00:17:22.370 20:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:22.370 20:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:17:22.628 20:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:17:22.628 20:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:17:22.628 20:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:23.194 20:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:23.194 20:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:17:23.194 20:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:17:23.194 20:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:17:23.194 20:47:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:23.452 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:23.452 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:17:23.710 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:17:23.710 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:17:23.710 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:23.710 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:17:24.276 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:17:24.276 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:17:24.276 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:24.276 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:24.276 20:47:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:24.537 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:17:24.537 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:17:24.537 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:25.167 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:25.167 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:25.167 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:17:25.167 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:17:25.167 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:25.168 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:25.168 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:25.168 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:25.168 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:17:25.168 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:25.168 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:25.168 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:25.168 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:25.168 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:25.168 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:17:25.168 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:25.168 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:17:25.168 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:25.168 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:25.425 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:25.426 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:25.426 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.kJX6wm9Yxu 00:17:25.426 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:17:25.426 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.c5PW5lVg20 00:17:25.426 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:25.426 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:25.426 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.kJX6wm9Yxu 00:17:25.426 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.c5PW5lVg20 00:17:25.426 20:47:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:25.683 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:25.941 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.kJX6wm9Yxu 00:17:25.941 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.kJX6wm9Yxu 00:17:25.941 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:26.199 [2024-11-26 20:47:29.844753] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:26.199 20:47:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:26.457 20:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:26.715 [2024-11-26 20:47:30.382225] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:26.715 [2024-11-26 20:47:30.382517] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:26.715 20:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:27.282 malloc0 00:17:27.282 20:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:27.282 20:47:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.kJX6wm9Yxu 00:17:27.540 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:27.798 20:47:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.kJX6wm9Yxu 00:17:40.013 Initializing NVMe Controllers 00:17:40.013 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:40.013 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:40.013 Initialization complete. Launching workers. 00:17:40.013 ======================================================== 00:17:40.013 Latency(us) 00:17:40.013 Device Information : IOPS MiB/s Average min max 00:17:40.013 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8637.60 33.74 7411.45 1214.19 9116.16 00:17:40.013 ======================================================== 00:17:40.013 Total : 8637.60 33.74 7411.45 1214.19 9116.16 00:17:40.013 00:17:40.013 20:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kJX6wm9Yxu 00:17:40.013 20:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:40.013 20:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:40.013 20:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:40.013 20:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.kJX6wm9Yxu 00:17:40.013 20:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:40.013 20:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1677633 00:17:40.013 20:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:40.013 20:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:40.013 20:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1677633 /var/tmp/bdevperf.sock 00:17:40.013 20:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1677633 ']' 00:17:40.013 20:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:40.013 20:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:40.013 20:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:40.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:40.013 20:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:40.013 20:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:40.013 [2024-11-26 20:47:41.646145] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:17:40.013 [2024-11-26 20:47:41.646232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1677633 ] 00:17:40.013 [2024-11-26 20:47:41.715412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.013 [2024-11-26 20:47:41.772605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.013 20:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:40.013 20:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:40.013 20:47:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kJX6wm9Yxu 00:17:40.013 20:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:40.013 [2024-11-26 20:47:42.478964] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:40.013 TLSTESTn1 00:17:40.013 20:47:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:40.013 Running I/O for 10 seconds... 
00:17:41.383 3399.00 IOPS, 13.28 MiB/s [2024-11-26T19:47:46.012Z] 3504.50 IOPS, 13.69 MiB/s [2024-11-26T19:47:46.942Z] 3484.33 IOPS, 13.61 MiB/s [2024-11-26T19:47:47.877Z] 3510.50 IOPS, 13.71 MiB/s [2024-11-26T19:47:48.811Z] 3536.00 IOPS, 13.81 MiB/s [2024-11-26T19:47:49.746Z] 3543.17 IOPS, 13.84 MiB/s [2024-11-26T19:47:51.120Z] 3548.29 IOPS, 13.86 MiB/s [2024-11-26T19:47:52.052Z] 3556.25 IOPS, 13.89 MiB/s [2024-11-26T19:47:52.982Z] 3563.67 IOPS, 13.92 MiB/s [2024-11-26T19:47:52.982Z] 3565.30 IOPS, 13.93 MiB/s 00:17:49.285 Latency(us) 00:17:49.285 [2024-11-26T19:47:52.982Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.285 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:49.285 Verification LBA range: start 0x0 length 0x2000 00:17:49.285 TLSTESTn1 : 10.02 3570.63 13.95 0.00 0.00 35788.84 7475.96 30680.56 00:17:49.285 [2024-11-26T19:47:52.982Z] =================================================================================================================== 00:17:49.285 [2024-11-26T19:47:52.982Z] Total : 3570.63 13.95 0.00 0.00 35788.84 7475.96 30680.56 00:17:49.285 { 00:17:49.285 "results": [ 00:17:49.285 { 00:17:49.285 "job": "TLSTESTn1", 00:17:49.285 "core_mask": "0x4", 00:17:49.285 "workload": "verify", 00:17:49.285 "status": "finished", 00:17:49.285 "verify_range": { 00:17:49.285 "start": 0, 00:17:49.285 "length": 8192 00:17:49.285 }, 00:17:49.285 "queue_depth": 128, 00:17:49.285 "io_size": 4096, 00:17:49.285 "runtime": 10.020073, 00:17:49.285 "iops": 3570.632669043429, 00:17:49.285 "mibps": 13.947783863450894, 00:17:49.285 "io_failed": 0, 00:17:49.285 "io_timeout": 0, 00:17:49.285 "avg_latency_us": 35788.83737682788, 00:17:49.285 "min_latency_us": 7475.958518518519, 00:17:49.285 "max_latency_us": 30680.557037037037 00:17:49.285 } 00:17:49.285 ], 00:17:49.285 "core_count": 1 00:17:49.285 } 00:17:49.285 20:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:49.285 20:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1677633 00:17:49.285 20:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1677633 ']' 00:17:49.285 20:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1677633 00:17:49.285 20:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:49.285 20:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:49.285 20:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1677633 00:17:49.285 20:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:49.285 20:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:49.285 20:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1677633' 00:17:49.285 killing process with pid 1677633 00:17:49.285 20:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1677633 00:17:49.285 Received shutdown signal, test time was about 10.000000 seconds 00:17:49.285 00:17:49.285 Latency(us) 00:17:49.285 [2024-11-26T19:47:52.982Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.285 [2024-11-26T19:47:52.982Z] 
=================================================================================================================== 00:17:49.285 [2024-11-26T19:47:52.982Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:49.285 20:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1677633 00:17:49.285 20:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.c5PW5lVg20 00:17:49.285 20:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:49.285 20:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.c5PW5lVg20 00:17:49.285 20:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:49.285 20:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.285 20:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:49.285 20:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:49.285 20:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.c5PW5lVg20 00:17:49.285 20:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:49.285 20:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:49.285 20:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:49.285 20:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.c5PW5lVg20 00:17:49.285 20:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:49.285 20:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1678957 00:17:49.285 20:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:49.285 20:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:49.285 20:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1678957 /var/tmp/bdevperf.sock 00:17:49.285 20:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1678957 ']' 00:17:49.285 20:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:49.285 20:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:49.285 20:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:49.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:49.285 20:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:49.285 20:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:49.542 [2024-11-26 20:47:53.013861] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:17:49.542 [2024-11-26 20:47:53.013928] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1678957 ] 00:17:49.543 [2024-11-26 20:47:53.078427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.543 [2024-11-26 20:47:53.135316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:49.799 20:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:49.799 20:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:49.799 20:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.c5PW5lVg20 00:17:50.056 20:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:50.324 [2024-11-26 20:47:53.823897] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:50.324 [2024-11-26 20:47:53.829423] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:50.324 [2024-11-26 20:47:53.829933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23af2f0 (107): Transport endpoint is not connected 00:17:50.324 [2024-11-26 20:47:53.830921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23af2f0 (9): Bad file descriptor 00:17:50.324 [2024-11-26 20:47:53.831920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:17:50.324 [2024-11-26 20:47:53.831946] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:50.324 [2024-11-26 20:47:53.831960] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:17:50.324 [2024-11-26 20:47:53.831973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:17:50.324 request: 00:17:50.324 { 00:17:50.324 "name": "TLSTEST", 00:17:50.324 "trtype": "tcp", 00:17:50.324 "traddr": "10.0.0.2", 00:17:50.324 "adrfam": "ipv4", 00:17:50.324 "trsvcid": "4420", 00:17:50.324 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:50.324 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:50.324 "prchk_reftag": false, 00:17:50.324 "prchk_guard": false, 00:17:50.324 "hdgst": false, 00:17:50.324 "ddgst": false, 00:17:50.324 "psk": "key0", 00:17:50.324 "allow_unrecognized_csi": false, 00:17:50.324 "method": "bdev_nvme_attach_controller", 00:17:50.324 "req_id": 1 00:17:50.324 } 00:17:50.324 Got JSON-RPC error response 00:17:50.324 response: 00:17:50.324 { 00:17:50.324 "code": -5, 00:17:50.324 "message": "Input/output error" 00:17:50.324 } 00:17:50.324 20:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1678957 00:17:50.324 20:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1678957 ']' 00:17:50.324 20:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1678957 00:17:50.324 20:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:50.324 20:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:50.324 20:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1678957 00:17:50.324 20:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:50.324 20:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:50.324 20:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1678957' 00:17:50.324 killing process with pid 1678957 00:17:50.324 20:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1678957 00:17:50.324 Received shutdown signal, test time was about 10.000000 seconds 00:17:50.324 00:17:50.324 Latency(us) 00:17:50.324 [2024-11-26T19:47:54.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.324 [2024-11-26T19:47:54.021Z] =================================================================================================================== 00:17:50.324 [2024-11-26T19:47:54.021Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:50.324 20:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1678957 00:17:50.583 20:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:50.583 20:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:50.583 20:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:50.583 20:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:50.583 20:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:50.583 20:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.kJX6wm9Yxu 00:17:50.583 20:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:50.583 20:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.kJX6wm9Yxu 00:17:50.583 20:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:50.583 20:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:50.583 20:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:50.583 20:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:50.583 20:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.kJX6wm9Yxu 00:17:50.583 20:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:50.583 20:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:50.583 20:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:50.583 20:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.kJX6wm9Yxu 00:17:50.583 20:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:50.583 20:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1679101 00:17:50.583 20:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:50.583 20:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:50.583 20:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1679101 /var/tmp/bdevperf.sock 00:17:50.583 20:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1679101 ']' 00:17:50.583 20:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:50.583 20:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:50.583 20:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:50.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:50.583 20:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:50.583 20:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:50.583 [2024-11-26 20:47:54.165114] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:17:50.583 [2024-11-26 20:47:54.165217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1679101 ] 00:17:50.583 [2024-11-26 20:47:54.231706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.841 [2024-11-26 20:47:54.291151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:50.841 20:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:50.841 20:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:50.841 20:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kJX6wm9Yxu 00:17:51.099 20:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:17:51.356 [2024-11-26 20:47:54.990574] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:51.356 [2024-11-26 20:47:55.000856] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:51.356 [2024-11-26 20:47:55.000888] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:51.356 [2024-11-26 20:47:55.000939] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:51.356 [2024-11-26 20:47:55.001756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bf2f0 (107): Transport endpoint is not connected 00:17:51.356 [2024-11-26 20:47:55.002746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10bf2f0 (9): Bad file descriptor 00:17:51.356 [2024-11-26 20:47:55.003746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:17:51.356 [2024-11-26 20:47:55.003766] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:51.356 [2024-11-26 20:47:55.003780] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:17:51.356 [2024-11-26 20:47:55.003793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:17:51.356 request: 00:17:51.356 { 00:17:51.356 "name": "TLSTEST", 00:17:51.356 "trtype": "tcp", 00:17:51.356 "traddr": "10.0.0.2", 00:17:51.356 "adrfam": "ipv4", 00:17:51.356 "trsvcid": "4420", 00:17:51.356 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:51.356 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:51.356 "prchk_reftag": false, 00:17:51.356 "prchk_guard": false, 00:17:51.356 "hdgst": false, 00:17:51.356 "ddgst": false, 00:17:51.356 "psk": "key0", 00:17:51.356 "allow_unrecognized_csi": false, 00:17:51.356 "method": "bdev_nvme_attach_controller", 00:17:51.356 "req_id": 1 00:17:51.356 } 00:17:51.356 Got JSON-RPC error response 00:17:51.356 response: 00:17:51.356 { 00:17:51.356 "code": -5, 00:17:51.356 "message": "Input/output error" 00:17:51.356 } 00:17:51.356 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1679101 00:17:51.356 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1679101 ']' 00:17:51.356 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1679101 00:17:51.356 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:51.356 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:51.356 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1679101 00:17:51.614 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:51.614 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:51.614 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1679101' 00:17:51.614 killing process with pid 1679101 00:17:51.614 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1679101 00:17:51.614 Received shutdown signal, test time was about 10.000000 seconds 00:17:51.614 00:17:51.614 Latency(us) 00:17:51.614 [2024-11-26T19:47:55.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.614 [2024-11-26T19:47:55.311Z] =================================================================================================================== 00:17:51.614 [2024-11-26T19:47:55.311Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:51.614 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1679101 00:17:51.614 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:51.614 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:51.614 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:51.614 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:51.614 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:51.614 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.kJX6wm9Yxu 00:17:51.614 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:51.614 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.kJX6wm9Yxu 00:17:51.614 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:51.614 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:51.614 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:51.614 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:51.614 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.kJX6wm9Yxu 00:17:51.614 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:51.614 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:51.614 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:51.614 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.kJX6wm9Yxu 00:17:51.614 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:51.614 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1679239 00:17:51.614 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:51.614 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:51.614 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1679239 /var/tmp/bdevperf.sock 00:17:51.614 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1679239 ']' 00:17:51.614 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:51.614 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:51.614 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:51.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:51.614 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:51.614 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:51.873 [2024-11-26 20:47:55.336899] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:17:51.873 [2024-11-26 20:47:55.337000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1679239 ] 00:17:51.873 [2024-11-26 20:47:55.402547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.873 [2024-11-26 20:47:55.457298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:51.873 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:51.873 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:51.873 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kJX6wm9Yxu 00:17:52.438 20:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:52.438 [2024-11-26 20:47:56.109620] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:52.438 [2024-11-26 20:47:56.115145] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:52.438 [2024-11-26 20:47:56.115180] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:52.438 [2024-11-26 20:47:56.115220] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:52.438 [2024-11-26 20:47:56.115769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b5e2f0 (107): Transport endpoint is not connected 00:17:52.438 [2024-11-26 20:47:56.116759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b5e2f0 (9): Bad file descriptor 00:17:52.438 [2024-11-26 20:47:56.117759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:17:52.438 [2024-11-26 20:47:56.117782] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:52.438 [2024-11-26 20:47:56.117805] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:17:52.438 [2024-11-26 20:47:56.117819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:17:52.438 request: 00:17:52.438 { 00:17:52.438 "name": "TLSTEST", 00:17:52.438 "trtype": "tcp", 00:17:52.438 "traddr": "10.0.0.2", 00:17:52.438 "adrfam": "ipv4", 00:17:52.438 "trsvcid": "4420", 00:17:52.438 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:52.438 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:52.438 "prchk_reftag": false, 00:17:52.438 "prchk_guard": false, 00:17:52.438 "hdgst": false, 00:17:52.438 "ddgst": false, 00:17:52.438 "psk": "key0", 00:17:52.438 "allow_unrecognized_csi": false, 00:17:52.438 "method": "bdev_nvme_attach_controller", 00:17:52.438 "req_id": 1 00:17:52.438 } 00:17:52.438 Got JSON-RPC error response 00:17:52.438 response: 00:17:52.438 { 00:17:52.438 "code": -5, 00:17:52.438 "message": "Input/output error" 00:17:52.438 } 00:17:52.697 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1679239 00:17:52.697 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1679239 ']' 00:17:52.697 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1679239 00:17:52.697 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:52.697 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:52.697 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1679239 00:17:52.697 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:52.697 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:52.697 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1679239' 00:17:52.697 killing process with pid 1679239 00:17:52.697 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1679239 00:17:52.697 Received shutdown signal, test time was about 10.000000 seconds 00:17:52.697 00:17:52.697 Latency(us) 00:17:52.697 [2024-11-26T19:47:56.394Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.697 [2024-11-26T19:47:56.394Z] =================================================================================================================== 00:17:52.697 [2024-11-26T19:47:56.394Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:52.697 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1679239 00:17:52.697 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:52.697 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:52.697 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:52.697 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:52.697 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:52.697 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:52.697 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:52.697 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:52.697 
20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:52.955 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:52.955 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:52.955 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:52.955 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:52.955 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:52.955 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:52.955 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:52.955 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:52.955 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:52.955 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1679382 00:17:52.955 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:52.955 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:52.955 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1679382 /var/tmp/bdevperf.sock 00:17:52.955 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1679382 ']' 00:17:52.955 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:52.955 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:52.955 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:52.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:52.955 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:52.955 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.955 [2024-11-26 20:47:56.442155] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
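This negative test passes an empty string as the key path; keyring_file_add_key requires an absolute path, so the add is rejected before any TLS work happens, as the entries that follow show. In sketch form (the second line is a hypothetical valid counterpart using the 0600 key file created later in this run, not a call taken from this test):

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
$RPC keyring_file_add_key key0 ''                    # rejected: non-absolute paths are not allowed -> -1 Operation not permitted
$RPC keyring_file_add_key key0 /tmp/tmp.1zPxjs6rVC   # hypothetical valid counterpart: absolute path to a 0600 key file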
00:17:52.955 [2024-11-26 20:47:56.442253] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1679382 ] 00:17:52.955 [2024-11-26 20:47:56.509440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.955 [2024-11-26 20:47:56.564044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:53.213 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:53.213 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:53.213 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:17:53.471 [2024-11-26 20:47:56.936836] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:17:53.471 [2024-11-26 20:47:56.936875] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:53.471 request: 00:17:53.471 { 00:17:53.471 "name": "key0", 00:17:53.471 "path": "", 00:17:53.471 "method": "keyring_file_add_key", 00:17:53.471 "req_id": 1 00:17:53.471 } 00:17:53.471 Got JSON-RPC error response 00:17:53.471 response: 00:17:53.471 { 00:17:53.471 "code": -1, 00:17:53.471 "message": "Operation not permitted" 00:17:53.471 } 00:17:53.472 20:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:53.730 [2024-11-26 20:47:57.201678] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:53.730 [2024-11-26 20:47:57.201730] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:17:53.730 request: 00:17:53.730 { 00:17:53.730 "name": "TLSTEST", 00:17:53.730 "trtype": "tcp", 00:17:53.730 "traddr": "10.0.0.2", 00:17:53.730 "adrfam": "ipv4", 00:17:53.730 "trsvcid": "4420", 00:17:53.730 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:53.730 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:53.730 "prchk_reftag": false, 00:17:53.730 "prchk_guard": false, 00:17:53.730 "hdgst": false, 00:17:53.730 "ddgst": false, 00:17:53.730 "psk": "key0", 00:17:53.730 "allow_unrecognized_csi": false, 00:17:53.730 "method": "bdev_nvme_attach_controller", 00:17:53.730 "req_id": 1 00:17:53.730 } 00:17:53.730 Got JSON-RPC error response 00:17:53.730 response: 00:17:53.730 { 00:17:53.730 "code": -126, 00:17:53.730 "message": "Required key not available" 00:17:53.730 } 00:17:53.730 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1679382 00:17:53.730 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1679382 ']' 00:17:53.730 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1679382 00:17:53.730 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:53.730 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.730 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
1679382 00:17:53.730 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:53.730 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:53.730 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1679382' 00:17:53.730 killing process with pid 1679382 00:17:53.730 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1679382 00:17:53.730 Received shutdown signal, test time was about 10.000000 seconds 00:17:53.730 00:17:53.730 Latency(us) 00:17:53.730 [2024-11-26T19:47:57.427Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.730 [2024-11-26T19:47:57.427Z] =================================================================================================================== 00:17:53.730 [2024-11-26T19:47:57.427Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:53.730 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1679382 00:17:53.988 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:53.988 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:53.988 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:53.988 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:53.988 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:53.988 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1675731 00:17:53.988 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1675731 ']' 00:17:53.988 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1675731 00:17:53.988 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:53.988 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.988 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1675731 00:17:53.988 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:53.988 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:53.988 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1675731' 00:17:53.988 killing process with pid 1675731 00:17:53.988 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1675731 00:17:53.988 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1675731 00:17:54.247 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:54.247 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:54.247 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:54.247 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:54.247 20:47:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:54.247 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:17:54.247 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:54.247 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:54.247 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:17:54.247 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.1zPxjs6rVC 00:17:54.247 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:54.247 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.1zPxjs6rVC 00:17:54.247 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:17:54.247 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:54.247 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:54.247 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:54.247 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1679532 00:17:54.247 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:54.247 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1679532 00:17:54.247 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1679532 ']' 00:17:54.247 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.247 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.247 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.247 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.247 20:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:54.247 [2024-11-26 20:47:57.871940] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:17:54.247 [2024-11-26 20:47:57.872037] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.505 [2024-11-26 20:47:57.946366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.505 [2024-11-26 20:47:58.001715] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:54.505 [2024-11-26 20:47:58.001775] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
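The key_long value above comes from format_interchange_psk, which wraps the raw hex string in the NVMe TLS PSK interchange format before it is written to the 0600 key file. A sketch of what that wrapping most likely does follows; the little-endian CRC-32 append and the two-hex-digit hash field are assumptions inferred from the logged output, so compare the result against the key_long printed above rather than taking the sketch as authoritative.

key=00112233445566778899aabbccddeeff0011223344556677
digest=2
python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()                   # the PSK exactly as configured (an ASCII hex string here)
digest = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed byte order for the appended CRC-32
print(f"NVMeTLSkey-1:{digest:02x}:" + base64.b64encode(key + crc).decode() + ":")
PYEOF
# Expected, per the log:
# NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: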
00:17:54.505 [2024-11-26 20:47:58.001799] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:54.505 [2024-11-26 20:47:58.001810] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:54.505 [2024-11-26 20:47:58.001820] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:54.505 [2024-11-26 20:47:58.002358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:54.505 20:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:54.505 20:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:54.505 20:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:54.506 20:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:54.506 20:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:54.506 20:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:54.506 20:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.1zPxjs6rVC 00:17:54.506 20:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.1zPxjs6rVC 00:17:54.506 20:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:54.763 [2024-11-26 20:47:58.390934] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:54.763 20:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:55.021 20:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:55.279 [2024-11-26 20:47:58.936475] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:55.279 [2024-11-26 20:47:58.936739] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:55.279 20:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:55.845 malloc0 00:17:55.845 20:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:56.103 20:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.1zPxjs6rVC 00:17:56.361 20:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:56.619 20:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1zPxjs6rVC 00:17:56.619 20:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:17:56.619 20:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:56.619 20:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:56.619 20:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.1zPxjs6rVC 00:17:56.619 20:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:56.619 20:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1679899 00:17:56.619 20:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:56.619 20:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1679899 /var/tmp/bdevperf.sock 00:17:56.619 20:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:56.619 20:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1679899 ']' 00:17:56.619 20:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:56.619 20:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:56.619 20:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:56.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:56.620 20:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:56.620 20:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.620 [2024-11-26 20:48:00.199556] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
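The setup_nvmf_tgt calls interleaved a few entries above amount to the following target-side sequence, with arguments exactly as logged (RPC is shorthand for the scripts/rpc.py path used throughout this run; the target's default RPC socket is implied):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS (secure-channel) listener
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC keyring_file_add_key key0 /tmp/tmp.1zPxjs6rVC                                          # the 0600 interchange-format key file
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0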
00:17:56.620 [2024-11-26 20:48:00.199656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1679899 ] 00:17:56.620 [2024-11-26 20:48:00.269390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.877 [2024-11-26 20:48:00.331523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:56.877 20:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:56.877 20:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:56.877 20:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1zPxjs6rVC 00:17:57.135 20:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:57.393 [2024-11-26 20:48:00.998991] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:57.393 TLSTESTn1 00:17:57.650 20:48:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:57.650 Running I/O for 10 seconds... 00:17:59.585 3200.00 IOPS, 12.50 MiB/s [2024-11-26T19:48:04.212Z] 3287.00 IOPS, 12.84 MiB/s [2024-11-26T19:48:05.582Z] 3319.67 IOPS, 12.97 MiB/s [2024-11-26T19:48:06.513Z] 3317.75 IOPS, 12.96 MiB/s [2024-11-26T19:48:07.447Z] 3323.60 IOPS, 12.98 MiB/s [2024-11-26T19:48:08.381Z] 3326.67 IOPS, 12.99 MiB/s [2024-11-26T19:48:09.312Z] 3328.00 IOPS, 13.00 MiB/s [2024-11-26T19:48:10.244Z] 3329.25 IOPS, 13.00 MiB/s [2024-11-26T19:48:11.616Z] 3334.56 IOPS, 13.03 MiB/s [2024-11-26T19:48:11.616Z] 3341.30 IOPS, 13.05 MiB/s 00:18:07.919 Latency(us) 00:18:07.919 [2024-11-26T19:48:11.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.919 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:07.919 Verification LBA range: start 0x0 length 0x2000 00:18:07.919 TLSTESTn1 : 10.02 3347.08 13.07 0.00 0.00 38176.18 7864.32 32428.18 00:18:07.919 [2024-11-26T19:48:11.616Z] =================================================================================================================== 00:18:07.919 [2024-11-26T19:48:11.616Z] Total : 3347.08 13.07 0.00 0.00 38176.18 7864.32 32428.18 00:18:07.919 { 00:18:07.919 "results": [ 00:18:07.919 { 00:18:07.919 "job": "TLSTESTn1", 00:18:07.919 "core_mask": "0x4", 00:18:07.919 "workload": "verify", 00:18:07.919 "status": "finished", 00:18:07.919 "verify_range": { 00:18:07.919 "start": 0, 00:18:07.919 "length": 8192 00:18:07.919 }, 00:18:07.919 "queue_depth": 128, 00:18:07.919 "io_size": 4096, 00:18:07.919 "runtime": 10.020674, 00:18:07.919 "iops": 3347.0802462988017, 00:18:07.919 "mibps": 13.074532212104694, 00:18:07.919 "io_failed": 0, 00:18:07.919 "io_timeout": 0, 00:18:07.919 "avg_latency_us": 38176.177513460985, 00:18:07.919 "min_latency_us": 7864.32, 00:18:07.919 "max_latency_us": 32428.183703703704 00:18:07.919 } 00:18:07.919 ], 00:18:07.919 "core_count": 
1 00:18:07.919 } 00:18:07.919 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:07.919 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1679899 00:18:07.919 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1679899 ']' 00:18:07.919 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1679899 00:18:07.919 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:07.919 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:07.919 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1679899 00:18:07.919 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:07.919 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:07.919 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1679899' 00:18:07.919 killing process with pid 1679899 00:18:07.919 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1679899 00:18:07.919 Received shutdown signal, test time was about 10.000000 seconds 00:18:07.919 00:18:07.919 Latency(us) 00:18:07.919 [2024-11-26T19:48:11.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.919 [2024-11-26T19:48:11.616Z] =================================================================================================================== 00:18:07.919 [2024-11-26T19:48:11.616Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:07.919 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1679899 00:18:07.919 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.1zPxjs6rVC 00:18:07.919 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1zPxjs6rVC 00:18:07.919 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:07.919 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1zPxjs6rVC 00:18:07.919 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:07.919 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:07.919 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:07.919 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:07.919 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1zPxjs6rVC 00:18:07.919 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:07.919 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:07.919 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:07.919 20:48:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.1zPxjs6rVC 00:18:07.919 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:07.919 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1681843 00:18:07.919 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:07.919 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:07.919 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1681843 /var/tmp/bdevperf.sock 00:18:07.919 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1681843 ']' 00:18:07.919 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:07.919 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:07.919 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:07.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:07.919 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:07.919 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:07.919 [2024-11-26 20:48:11.570647] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
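For the run that just completed (TLSTESTn1), the attach sequence was the same as sketched earlier, but with the registered host1/cnode1 pair and the key at /tmp/tmp.1zPxjs6rVC; the I/O phase itself is kicked off through the bdevperf RPC helper, as logged above:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
# bdevperf was launched with -q 128 -o 4096 -w verify -t 10, hence the 10-second verify
# workload and the ~3347 IOPS / 13.07 MiB/s totals in the JSON summary above.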
00:18:07.919 [2024-11-26 20:48:11.570745] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1681843 ] 00:18:08.177 [2024-11-26 20:48:11.637826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.177 [2024-11-26 20:48:11.694364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:08.177 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:08.177 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:08.177 20:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1zPxjs6rVC 00:18:08.435 [2024-11-26 20:48:12.045800] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.1zPxjs6rVC': 0100666 00:18:08.435 [2024-11-26 20:48:12.045842] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:08.435 request: 00:18:08.435 { 00:18:08.435 "name": "key0", 00:18:08.435 "path": "/tmp/tmp.1zPxjs6rVC", 00:18:08.435 "method": "keyring_file_add_key", 00:18:08.435 "req_id": 1 00:18:08.435 } 00:18:08.435 Got JSON-RPC error response 00:18:08.435 response: 00:18:08.435 { 00:18:08.435 "code": -1, 00:18:08.435 "message": "Operation not permitted" 00:18:08.435 } 00:18:08.435 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:08.693 [2024-11-26 20:48:12.310616] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:08.693 [2024-11-26 20:48:12.310670] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:08.693 request: 00:18:08.693 { 00:18:08.693 "name": "TLSTEST", 00:18:08.693 "trtype": "tcp", 00:18:08.693 "traddr": "10.0.0.2", 00:18:08.693 "adrfam": "ipv4", 00:18:08.693 "trsvcid": "4420", 00:18:08.693 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.693 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:08.693 "prchk_reftag": false, 00:18:08.693 "prchk_guard": false, 00:18:08.693 "hdgst": false, 00:18:08.693 "ddgst": false, 00:18:08.693 "psk": "key0", 00:18:08.693 "allow_unrecognized_csi": false, 00:18:08.693 "method": "bdev_nvme_attach_controller", 00:18:08.693 "req_id": 1 00:18:08.693 } 00:18:08.693 Got JSON-RPC error response 00:18:08.693 response: 00:18:08.693 { 00:18:08.693 "code": -126, 00:18:08.693 "message": "Required key not available" 00:18:08.693 } 00:18:08.693 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1681843 00:18:08.693 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1681843 ']' 00:18:08.693 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1681843 00:18:08.693 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:08.693 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:08.693 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1681843 00:18:08.693 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:08.693 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:08.693 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1681843' 00:18:08.693 killing process with pid 1681843 00:18:08.693 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1681843 00:18:08.693 Received shutdown signal, test time was about 10.000000 seconds 00:18:08.693 00:18:08.693 Latency(us) 00:18:08.693 [2024-11-26T19:48:12.390Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.693 [2024-11-26T19:48:12.390Z] =================================================================================================================== 00:18:08.693 [2024-11-26T19:48:12.390Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:08.693 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1681843 00:18:08.951 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:08.951 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:08.951 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:08.951 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:08.951 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:08.951 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1679532 00:18:08.951 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1679532 ']' 00:18:08.951 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1679532 00:18:08.951 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:08.951 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:08.951 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1679532 00:18:08.951 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:08.951 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:08.951 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1679532' 00:18:08.951 killing process with pid 1679532 00:18:08.951 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1679532 00:18:08.951 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1679532 00:18:09.208 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:09.208 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:09.208 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:09.208 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.208 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=1682028 00:18:09.208 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:09.208 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1682028 00:18:09.208 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1682028 ']' 00:18:09.208 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.208 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:09.208 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.208 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:09.208 20:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.208 [2024-11-26 20:48:12.869047] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:18:09.209 [2024-11-26 20:48:12.869140] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:09.467 [2024-11-26 20:48:12.939575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.467 [2024-11-26 20:48:12.997902] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:09.467 [2024-11-26 20:48:12.997959] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:09.467 [2024-11-26 20:48:12.997973] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:09.467 [2024-11-26 20:48:12.997984] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:09.467 [2024-11-26 20:48:12.997993] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
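The keyring failure a few entries above, and the matching one just below when setup_nvmf_tgt is retried against the new target, are purely about file permissions: after the test chmods the key file to 0666, keyring_file_add_key rejects it ("Invalid permissions for key file ... 0100666"), and the file is only accepted again once it is chmod'ed back to 0600 near the end of this block. The exact permission policy isn't spelled out in the log, so the sketch below just mirrors the two logged outcomes:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
chmod 0666 /tmp/tmp.1zPxjs6rVC
$RPC keyring_file_add_key key0 /tmp/tmp.1zPxjs6rVC   # rejected: invalid permissions (0100666) -> -1 Operation not permitted
chmod 0600 /tmp/tmp.1zPxjs6rVC
$RPC keyring_file_add_key key0 /tmp/tmp.1zPxjs6rVC   # accepted again, as in the later passing tests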
00:18:09.467 [2024-11-26 20:48:12.998606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.467 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.467 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:09.467 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:09.467 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:09.467 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.467 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:09.467 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.1zPxjs6rVC 00:18:09.467 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:09.467 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.1zPxjs6rVC 00:18:09.467 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:18:09.467 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.467 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:18:09.467 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.467 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.1zPxjs6rVC 00:18:09.467 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.1zPxjs6rVC 00:18:09.467 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:09.725 [2024-11-26 20:48:13.393752] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:09.725 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:10.290 20:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:10.549 [2024-11-26 20:48:13.999441] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:10.549 [2024-11-26 20:48:13.999724] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:10.549 20:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:10.807 malloc0 00:18:10.807 20:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:11.066 20:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.1zPxjs6rVC 00:18:11.324 [2024-11-26 
20:48:14.981034] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.1zPxjs6rVC': 0100666 00:18:11.324 [2024-11-26 20:48:14.981075] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:11.324 request: 00:18:11.324 { 00:18:11.324 "name": "key0", 00:18:11.324 "path": "/tmp/tmp.1zPxjs6rVC", 00:18:11.324 "method": "keyring_file_add_key", 00:18:11.324 "req_id": 1 00:18:11.324 } 00:18:11.324 Got JSON-RPC error response 00:18:11.324 response: 00:18:11.324 { 00:18:11.324 "code": -1, 00:18:11.324 "message": "Operation not permitted" 00:18:11.324 } 00:18:11.324 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:11.890 [2024-11-26 20:48:15.305904] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:11.890 [2024-11-26 20:48:15.305951] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:11.890 request: 00:18:11.890 { 00:18:11.890 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.890 "host": "nqn.2016-06.io.spdk:host1", 00:18:11.890 "psk": "key0", 00:18:11.890 "method": "nvmf_subsystem_add_host", 00:18:11.890 "req_id": 1 00:18:11.890 } 00:18:11.890 Got JSON-RPC error response 00:18:11.890 response: 00:18:11.890 { 00:18:11.890 "code": -32603, 00:18:11.890 "message": "Internal error" 00:18:11.890 } 00:18:11.890 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:11.890 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:11.890 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:11.890 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:11.890 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1682028 00:18:11.890 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1682028 ']' 00:18:11.890 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1682028 00:18:11.890 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:11.890 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:11.890 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1682028 00:18:11.890 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:11.890 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:11.890 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1682028' 00:18:11.890 killing process with pid 1682028 00:18:11.890 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1682028 00:18:11.890 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1682028 00:18:12.148 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.1zPxjs6rVC 00:18:12.148 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:12.148 20:48:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:12.148 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:12.148 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:12.148 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1682333 00:18:12.148 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:12.148 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1682333 00:18:12.148 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1682333 ']' 00:18:12.148 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.148 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:12.148 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:12.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:12.148 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:12.148 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:12.148 [2024-11-26 20:48:15.674215] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:18:12.148 [2024-11-26 20:48:15.674338] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:12.148 [2024-11-26 20:48:15.744990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.148 [2024-11-26 20:48:15.800392] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:12.148 [2024-11-26 20:48:15.800445] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:12.148 [2024-11-26 20:48:15.800464] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:12.148 [2024-11-26 20:48:15.800475] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:12.148 [2024-11-26 20:48:15.800485] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
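[editor's note, not captured console output] The keyring_file_add_key failure logged above is SPDK's permission check on the PSK interchange file: mode 0100666 (group/other readable) is rejected, so the subsequent nvmf_subsystem_add_host cannot find 'key0'. A minimal sketch of the recovery the test script performs for the run starting here, assuming the same key path and target RPC socket shown in the log (rpc.py abbreviates the full scripts/rpc.py path used throughout):
  # restrict the key file; 0666 is exactly what triggered 'Invalid permissions' above
  chmod 0600 /tmp/tmp.1zPxjs6rVC
  # register the PSK with the keyring, then allow the host to use it for this subsystem
  rpc.py keyring_file_add_key key0 /tmp/tmp.1zPxjs6rVC
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
With the permissions fixed, the same two RPCs succeed in the retry below instead of returning the JSON-RPC errors seen in the first attempt.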
00:18:12.148 [2024-11-26 20:48:15.801047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.406 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:12.406 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:12.406 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:12.406 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:12.406 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:12.406 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:12.406 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.1zPxjs6rVC 00:18:12.406 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.1zPxjs6rVC 00:18:12.406 20:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:12.664 [2024-11-26 20:48:16.193380] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:12.664 20:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:12.922 20:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:13.180 [2024-11-26 20:48:16.734875] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:13.180 [2024-11-26 20:48:16.735126] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:13.180 20:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:13.437 malloc0 00:18:13.437 20:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:13.694 20:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.1zPxjs6rVC 00:18:13.952 20:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:14.518 20:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1682619 00:18:14.518 20:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:14.518 20:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:14.518 20:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1682619 /var/tmp/bdevperf.sock 00:18:14.518 20:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1682619 ']' 00:18:14.518 20:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:14.518 20:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.518 20:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:14.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:14.518 20:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.518 20:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.518 [2024-11-26 20:48:17.986668] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:18:14.518 [2024-11-26 20:48:17.986735] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1682619 ] 00:18:14.518 [2024-11-26 20:48:18.050902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.518 [2024-11-26 20:48:18.108614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:14.776 20:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.776 20:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:14.776 20:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1zPxjs6rVC 00:18:15.033 20:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:15.291 [2024-11-26 20:48:18.759090] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:15.291 TLSTESTn1 00:18:15.291 20:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:15.549 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:15.549 "subsystems": [ 00:18:15.549 { 00:18:15.549 "subsystem": "keyring", 00:18:15.549 "config": [ 00:18:15.549 { 00:18:15.549 "method": "keyring_file_add_key", 00:18:15.549 "params": { 00:18:15.549 "name": "key0", 00:18:15.549 "path": "/tmp/tmp.1zPxjs6rVC" 00:18:15.549 } 00:18:15.549 } 00:18:15.549 ] 00:18:15.549 }, 00:18:15.549 { 00:18:15.549 "subsystem": "iobuf", 00:18:15.549 "config": [ 00:18:15.549 { 00:18:15.549 "method": "iobuf_set_options", 00:18:15.549 "params": { 00:18:15.549 "small_pool_count": 8192, 00:18:15.549 "large_pool_count": 1024, 00:18:15.549 "small_bufsize": 8192, 00:18:15.549 "large_bufsize": 135168, 00:18:15.549 "enable_numa": false 00:18:15.549 } 00:18:15.549 } 00:18:15.549 ] 00:18:15.549 }, 00:18:15.549 { 00:18:15.549 "subsystem": "sock", 00:18:15.549 "config": [ 00:18:15.549 { 00:18:15.549 "method": "sock_set_default_impl", 00:18:15.549 "params": { 00:18:15.549 "impl_name": "posix" 
00:18:15.549 } 00:18:15.549 }, 00:18:15.549 { 00:18:15.549 "method": "sock_impl_set_options", 00:18:15.549 "params": { 00:18:15.549 "impl_name": "ssl", 00:18:15.549 "recv_buf_size": 4096, 00:18:15.549 "send_buf_size": 4096, 00:18:15.549 "enable_recv_pipe": true, 00:18:15.549 "enable_quickack": false, 00:18:15.549 "enable_placement_id": 0, 00:18:15.549 "enable_zerocopy_send_server": true, 00:18:15.549 "enable_zerocopy_send_client": false, 00:18:15.549 "zerocopy_threshold": 0, 00:18:15.549 "tls_version": 0, 00:18:15.549 "enable_ktls": false 00:18:15.549 } 00:18:15.549 }, 00:18:15.549 { 00:18:15.549 "method": "sock_impl_set_options", 00:18:15.549 "params": { 00:18:15.549 "impl_name": "posix", 00:18:15.549 "recv_buf_size": 2097152, 00:18:15.549 "send_buf_size": 2097152, 00:18:15.549 "enable_recv_pipe": true, 00:18:15.549 "enable_quickack": false, 00:18:15.549 "enable_placement_id": 0, 00:18:15.549 "enable_zerocopy_send_server": true, 00:18:15.549 "enable_zerocopy_send_client": false, 00:18:15.549 "zerocopy_threshold": 0, 00:18:15.549 "tls_version": 0, 00:18:15.549 "enable_ktls": false 00:18:15.549 } 00:18:15.549 } 00:18:15.549 ] 00:18:15.549 }, 00:18:15.549 { 00:18:15.549 "subsystem": "vmd", 00:18:15.549 "config": [] 00:18:15.549 }, 00:18:15.549 { 00:18:15.549 "subsystem": "accel", 00:18:15.549 "config": [ 00:18:15.549 { 00:18:15.549 "method": "accel_set_options", 00:18:15.549 "params": { 00:18:15.549 "small_cache_size": 128, 00:18:15.549 "large_cache_size": 16, 00:18:15.549 "task_count": 2048, 00:18:15.549 "sequence_count": 2048, 00:18:15.549 "buf_count": 2048 00:18:15.549 } 00:18:15.549 } 00:18:15.549 ] 00:18:15.549 }, 00:18:15.549 { 00:18:15.549 "subsystem": "bdev", 00:18:15.549 "config": [ 00:18:15.549 { 00:18:15.549 "method": "bdev_set_options", 00:18:15.549 "params": { 00:18:15.549 "bdev_io_pool_size": 65535, 00:18:15.549 "bdev_io_cache_size": 256, 00:18:15.549 "bdev_auto_examine": true, 00:18:15.549 "iobuf_small_cache_size": 128, 00:18:15.549 "iobuf_large_cache_size": 16 00:18:15.549 } 00:18:15.549 }, 00:18:15.549 { 00:18:15.549 "method": "bdev_raid_set_options", 00:18:15.549 "params": { 00:18:15.549 "process_window_size_kb": 1024, 00:18:15.550 "process_max_bandwidth_mb_sec": 0 00:18:15.550 } 00:18:15.550 }, 00:18:15.550 { 00:18:15.550 "method": "bdev_iscsi_set_options", 00:18:15.550 "params": { 00:18:15.550 "timeout_sec": 30 00:18:15.550 } 00:18:15.550 }, 00:18:15.550 { 00:18:15.550 "method": "bdev_nvme_set_options", 00:18:15.550 "params": { 00:18:15.550 "action_on_timeout": "none", 00:18:15.550 "timeout_us": 0, 00:18:15.550 "timeout_admin_us": 0, 00:18:15.550 "keep_alive_timeout_ms": 10000, 00:18:15.550 "arbitration_burst": 0, 00:18:15.550 "low_priority_weight": 0, 00:18:15.550 "medium_priority_weight": 0, 00:18:15.550 "high_priority_weight": 0, 00:18:15.550 "nvme_adminq_poll_period_us": 10000, 00:18:15.550 "nvme_ioq_poll_period_us": 0, 00:18:15.550 "io_queue_requests": 0, 00:18:15.550 "delay_cmd_submit": true, 00:18:15.550 "transport_retry_count": 4, 00:18:15.550 "bdev_retry_count": 3, 00:18:15.550 "transport_ack_timeout": 0, 00:18:15.550 "ctrlr_loss_timeout_sec": 0, 00:18:15.550 "reconnect_delay_sec": 0, 00:18:15.550 "fast_io_fail_timeout_sec": 0, 00:18:15.550 "disable_auto_failback": false, 00:18:15.550 "generate_uuids": false, 00:18:15.550 "transport_tos": 0, 00:18:15.550 "nvme_error_stat": false, 00:18:15.550 "rdma_srq_size": 0, 00:18:15.550 "io_path_stat": false, 00:18:15.550 "allow_accel_sequence": false, 00:18:15.550 "rdma_max_cq_size": 0, 00:18:15.550 
"rdma_cm_event_timeout_ms": 0, 00:18:15.550 "dhchap_digests": [ 00:18:15.550 "sha256", 00:18:15.550 "sha384", 00:18:15.550 "sha512" 00:18:15.550 ], 00:18:15.550 "dhchap_dhgroups": [ 00:18:15.550 "null", 00:18:15.550 "ffdhe2048", 00:18:15.550 "ffdhe3072", 00:18:15.550 "ffdhe4096", 00:18:15.550 "ffdhe6144", 00:18:15.550 "ffdhe8192" 00:18:15.550 ] 00:18:15.550 } 00:18:15.550 }, 00:18:15.550 { 00:18:15.550 "method": "bdev_nvme_set_hotplug", 00:18:15.550 "params": { 00:18:15.550 "period_us": 100000, 00:18:15.550 "enable": false 00:18:15.550 } 00:18:15.550 }, 00:18:15.550 { 00:18:15.550 "method": "bdev_malloc_create", 00:18:15.550 "params": { 00:18:15.550 "name": "malloc0", 00:18:15.550 "num_blocks": 8192, 00:18:15.550 "block_size": 4096, 00:18:15.550 "physical_block_size": 4096, 00:18:15.550 "uuid": "7a9fd1bd-f7de-42e1-b9ee-aba1b8991572", 00:18:15.550 "optimal_io_boundary": 0, 00:18:15.550 "md_size": 0, 00:18:15.550 "dif_type": 0, 00:18:15.550 "dif_is_head_of_md": false, 00:18:15.550 "dif_pi_format": 0 00:18:15.550 } 00:18:15.550 }, 00:18:15.550 { 00:18:15.550 "method": "bdev_wait_for_examine" 00:18:15.550 } 00:18:15.550 ] 00:18:15.550 }, 00:18:15.550 { 00:18:15.550 "subsystem": "nbd", 00:18:15.550 "config": [] 00:18:15.550 }, 00:18:15.550 { 00:18:15.550 "subsystem": "scheduler", 00:18:15.550 "config": [ 00:18:15.550 { 00:18:15.550 "method": "framework_set_scheduler", 00:18:15.550 "params": { 00:18:15.550 "name": "static" 00:18:15.550 } 00:18:15.550 } 00:18:15.550 ] 00:18:15.550 }, 00:18:15.550 { 00:18:15.550 "subsystem": "nvmf", 00:18:15.550 "config": [ 00:18:15.550 { 00:18:15.550 "method": "nvmf_set_config", 00:18:15.550 "params": { 00:18:15.550 "discovery_filter": "match_any", 00:18:15.550 "admin_cmd_passthru": { 00:18:15.550 "identify_ctrlr": false 00:18:15.550 }, 00:18:15.550 "dhchap_digests": [ 00:18:15.550 "sha256", 00:18:15.550 "sha384", 00:18:15.550 "sha512" 00:18:15.550 ], 00:18:15.550 "dhchap_dhgroups": [ 00:18:15.550 "null", 00:18:15.550 "ffdhe2048", 00:18:15.550 "ffdhe3072", 00:18:15.550 "ffdhe4096", 00:18:15.550 "ffdhe6144", 00:18:15.550 "ffdhe8192" 00:18:15.550 ] 00:18:15.550 } 00:18:15.550 }, 00:18:15.550 { 00:18:15.550 "method": "nvmf_set_max_subsystems", 00:18:15.550 "params": { 00:18:15.550 "max_subsystems": 1024 00:18:15.550 } 00:18:15.550 }, 00:18:15.550 { 00:18:15.550 "method": "nvmf_set_crdt", 00:18:15.550 "params": { 00:18:15.550 "crdt1": 0, 00:18:15.550 "crdt2": 0, 00:18:15.550 "crdt3": 0 00:18:15.550 } 00:18:15.550 }, 00:18:15.550 { 00:18:15.550 "method": "nvmf_create_transport", 00:18:15.550 "params": { 00:18:15.550 "trtype": "TCP", 00:18:15.550 "max_queue_depth": 128, 00:18:15.550 "max_io_qpairs_per_ctrlr": 127, 00:18:15.550 "in_capsule_data_size": 4096, 00:18:15.550 "max_io_size": 131072, 00:18:15.550 "io_unit_size": 131072, 00:18:15.550 "max_aq_depth": 128, 00:18:15.550 "num_shared_buffers": 511, 00:18:15.550 "buf_cache_size": 4294967295, 00:18:15.550 "dif_insert_or_strip": false, 00:18:15.550 "zcopy": false, 00:18:15.550 "c2h_success": false, 00:18:15.550 "sock_priority": 0, 00:18:15.550 "abort_timeout_sec": 1, 00:18:15.550 "ack_timeout": 0, 00:18:15.550 "data_wr_pool_size": 0 00:18:15.550 } 00:18:15.550 }, 00:18:15.550 { 00:18:15.550 "method": "nvmf_create_subsystem", 00:18:15.550 "params": { 00:18:15.550 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.550 "allow_any_host": false, 00:18:15.550 "serial_number": "SPDK00000000000001", 00:18:15.550 "model_number": "SPDK bdev Controller", 00:18:15.550 "max_namespaces": 10, 00:18:15.550 "min_cntlid": 1, 00:18:15.550 
"max_cntlid": 65519, 00:18:15.550 "ana_reporting": false 00:18:15.550 } 00:18:15.550 }, 00:18:15.550 { 00:18:15.550 "method": "nvmf_subsystem_add_host", 00:18:15.550 "params": { 00:18:15.550 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.550 "host": "nqn.2016-06.io.spdk:host1", 00:18:15.550 "psk": "key0" 00:18:15.550 } 00:18:15.550 }, 00:18:15.550 { 00:18:15.550 "method": "nvmf_subsystem_add_ns", 00:18:15.550 "params": { 00:18:15.550 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.550 "namespace": { 00:18:15.550 "nsid": 1, 00:18:15.550 "bdev_name": "malloc0", 00:18:15.550 "nguid": "7A9FD1BDF7DE42E1B9EEABA1B8991572", 00:18:15.550 "uuid": "7a9fd1bd-f7de-42e1-b9ee-aba1b8991572", 00:18:15.550 "no_auto_visible": false 00:18:15.550 } 00:18:15.550 } 00:18:15.550 }, 00:18:15.550 { 00:18:15.550 "method": "nvmf_subsystem_add_listener", 00:18:15.550 "params": { 00:18:15.550 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.550 "listen_address": { 00:18:15.550 "trtype": "TCP", 00:18:15.550 "adrfam": "IPv4", 00:18:15.550 "traddr": "10.0.0.2", 00:18:15.550 "trsvcid": "4420" 00:18:15.550 }, 00:18:15.550 "secure_channel": true 00:18:15.550 } 00:18:15.550 } 00:18:15.550 ] 00:18:15.550 } 00:18:15.550 ] 00:18:15.550 }' 00:18:15.550 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:16.117 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:16.117 "subsystems": [ 00:18:16.117 { 00:18:16.117 "subsystem": "keyring", 00:18:16.117 "config": [ 00:18:16.117 { 00:18:16.117 "method": "keyring_file_add_key", 00:18:16.117 "params": { 00:18:16.117 "name": "key0", 00:18:16.117 "path": "/tmp/tmp.1zPxjs6rVC" 00:18:16.117 } 00:18:16.117 } 00:18:16.117 ] 00:18:16.117 }, 00:18:16.117 { 00:18:16.117 "subsystem": "iobuf", 00:18:16.117 "config": [ 00:18:16.117 { 00:18:16.117 "method": "iobuf_set_options", 00:18:16.117 "params": { 00:18:16.117 "small_pool_count": 8192, 00:18:16.117 "large_pool_count": 1024, 00:18:16.117 "small_bufsize": 8192, 00:18:16.117 "large_bufsize": 135168, 00:18:16.117 "enable_numa": false 00:18:16.117 } 00:18:16.117 } 00:18:16.117 ] 00:18:16.117 }, 00:18:16.117 { 00:18:16.117 "subsystem": "sock", 00:18:16.117 "config": [ 00:18:16.117 { 00:18:16.117 "method": "sock_set_default_impl", 00:18:16.117 "params": { 00:18:16.117 "impl_name": "posix" 00:18:16.117 } 00:18:16.117 }, 00:18:16.117 { 00:18:16.117 "method": "sock_impl_set_options", 00:18:16.117 "params": { 00:18:16.117 "impl_name": "ssl", 00:18:16.117 "recv_buf_size": 4096, 00:18:16.117 "send_buf_size": 4096, 00:18:16.117 "enable_recv_pipe": true, 00:18:16.117 "enable_quickack": false, 00:18:16.117 "enable_placement_id": 0, 00:18:16.117 "enable_zerocopy_send_server": true, 00:18:16.117 "enable_zerocopy_send_client": false, 00:18:16.117 "zerocopy_threshold": 0, 00:18:16.117 "tls_version": 0, 00:18:16.117 "enable_ktls": false 00:18:16.117 } 00:18:16.117 }, 00:18:16.117 { 00:18:16.117 "method": "sock_impl_set_options", 00:18:16.117 "params": { 00:18:16.117 "impl_name": "posix", 00:18:16.117 "recv_buf_size": 2097152, 00:18:16.117 "send_buf_size": 2097152, 00:18:16.117 "enable_recv_pipe": true, 00:18:16.117 "enable_quickack": false, 00:18:16.117 "enable_placement_id": 0, 00:18:16.117 "enable_zerocopy_send_server": true, 00:18:16.117 "enable_zerocopy_send_client": false, 00:18:16.117 "zerocopy_threshold": 0, 00:18:16.117 "tls_version": 0, 00:18:16.117 "enable_ktls": false 00:18:16.117 } 00:18:16.117 
} 00:18:16.117 ] 00:18:16.117 }, 00:18:16.117 { 00:18:16.117 "subsystem": "vmd", 00:18:16.117 "config": [] 00:18:16.117 }, 00:18:16.117 { 00:18:16.117 "subsystem": "accel", 00:18:16.117 "config": [ 00:18:16.117 { 00:18:16.117 "method": "accel_set_options", 00:18:16.117 "params": { 00:18:16.117 "small_cache_size": 128, 00:18:16.117 "large_cache_size": 16, 00:18:16.117 "task_count": 2048, 00:18:16.117 "sequence_count": 2048, 00:18:16.117 "buf_count": 2048 00:18:16.117 } 00:18:16.117 } 00:18:16.117 ] 00:18:16.117 }, 00:18:16.117 { 00:18:16.117 "subsystem": "bdev", 00:18:16.117 "config": [ 00:18:16.117 { 00:18:16.117 "method": "bdev_set_options", 00:18:16.117 "params": { 00:18:16.117 "bdev_io_pool_size": 65535, 00:18:16.117 "bdev_io_cache_size": 256, 00:18:16.117 "bdev_auto_examine": true, 00:18:16.117 "iobuf_small_cache_size": 128, 00:18:16.117 "iobuf_large_cache_size": 16 00:18:16.117 } 00:18:16.117 }, 00:18:16.117 { 00:18:16.117 "method": "bdev_raid_set_options", 00:18:16.117 "params": { 00:18:16.117 "process_window_size_kb": 1024, 00:18:16.117 "process_max_bandwidth_mb_sec": 0 00:18:16.117 } 00:18:16.117 }, 00:18:16.117 { 00:18:16.117 "method": "bdev_iscsi_set_options", 00:18:16.117 "params": { 00:18:16.117 "timeout_sec": 30 00:18:16.117 } 00:18:16.117 }, 00:18:16.117 { 00:18:16.117 "method": "bdev_nvme_set_options", 00:18:16.117 "params": { 00:18:16.117 "action_on_timeout": "none", 00:18:16.117 "timeout_us": 0, 00:18:16.117 "timeout_admin_us": 0, 00:18:16.117 "keep_alive_timeout_ms": 10000, 00:18:16.117 "arbitration_burst": 0, 00:18:16.117 "low_priority_weight": 0, 00:18:16.117 "medium_priority_weight": 0, 00:18:16.117 "high_priority_weight": 0, 00:18:16.117 "nvme_adminq_poll_period_us": 10000, 00:18:16.117 "nvme_ioq_poll_period_us": 0, 00:18:16.117 "io_queue_requests": 512, 00:18:16.117 "delay_cmd_submit": true, 00:18:16.117 "transport_retry_count": 4, 00:18:16.117 "bdev_retry_count": 3, 00:18:16.117 "transport_ack_timeout": 0, 00:18:16.117 "ctrlr_loss_timeout_sec": 0, 00:18:16.117 "reconnect_delay_sec": 0, 00:18:16.117 "fast_io_fail_timeout_sec": 0, 00:18:16.117 "disable_auto_failback": false, 00:18:16.117 "generate_uuids": false, 00:18:16.117 "transport_tos": 0, 00:18:16.117 "nvme_error_stat": false, 00:18:16.117 "rdma_srq_size": 0, 00:18:16.117 "io_path_stat": false, 00:18:16.117 "allow_accel_sequence": false, 00:18:16.117 "rdma_max_cq_size": 0, 00:18:16.117 "rdma_cm_event_timeout_ms": 0, 00:18:16.117 "dhchap_digests": [ 00:18:16.117 "sha256", 00:18:16.117 "sha384", 00:18:16.117 "sha512" 00:18:16.117 ], 00:18:16.117 "dhchap_dhgroups": [ 00:18:16.117 "null", 00:18:16.117 "ffdhe2048", 00:18:16.117 "ffdhe3072", 00:18:16.117 "ffdhe4096", 00:18:16.117 "ffdhe6144", 00:18:16.117 "ffdhe8192" 00:18:16.117 ] 00:18:16.117 } 00:18:16.117 }, 00:18:16.117 { 00:18:16.117 "method": "bdev_nvme_attach_controller", 00:18:16.117 "params": { 00:18:16.117 "name": "TLSTEST", 00:18:16.117 "trtype": "TCP", 00:18:16.117 "adrfam": "IPv4", 00:18:16.117 "traddr": "10.0.0.2", 00:18:16.117 "trsvcid": "4420", 00:18:16.117 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:16.117 "prchk_reftag": false, 00:18:16.117 "prchk_guard": false, 00:18:16.117 "ctrlr_loss_timeout_sec": 0, 00:18:16.117 "reconnect_delay_sec": 0, 00:18:16.117 "fast_io_fail_timeout_sec": 0, 00:18:16.117 "psk": "key0", 00:18:16.117 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:16.117 "hdgst": false, 00:18:16.117 "ddgst": false, 00:18:16.117 "multipath": "multipath" 00:18:16.117 } 00:18:16.117 }, 00:18:16.117 { 00:18:16.117 "method": 
"bdev_nvme_set_hotplug", 00:18:16.117 "params": { 00:18:16.117 "period_us": 100000, 00:18:16.117 "enable": false 00:18:16.117 } 00:18:16.117 }, 00:18:16.117 { 00:18:16.117 "method": "bdev_wait_for_examine" 00:18:16.117 } 00:18:16.117 ] 00:18:16.117 }, 00:18:16.117 { 00:18:16.117 "subsystem": "nbd", 00:18:16.117 "config": [] 00:18:16.117 } 00:18:16.117 ] 00:18:16.117 }' 00:18:16.117 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1682619 00:18:16.117 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1682619 ']' 00:18:16.117 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1682619 00:18:16.117 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:16.117 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:16.117 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1682619 00:18:16.117 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:16.118 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:16.118 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1682619' 00:18:16.118 killing process with pid 1682619 00:18:16.118 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1682619 00:18:16.118 Received shutdown signal, test time was about 10.000000 seconds 00:18:16.118 00:18:16.118 Latency(us) 00:18:16.118 [2024-11-26T19:48:19.815Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.118 [2024-11-26T19:48:19.815Z] =================================================================================================================== 00:18:16.118 [2024-11-26T19:48:19.815Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:16.118 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1682619 00:18:16.377 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1682333 00:18:16.377 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1682333 ']' 00:18:16.377 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1682333 00:18:16.377 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:16.377 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:16.377 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1682333 00:18:16.377 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:16.377 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:16.377 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1682333' 00:18:16.377 killing process with pid 1682333 00:18:16.377 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1682333 00:18:16.377 20:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1682333 00:18:16.635 20:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:16.635 20:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:16.635 20:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:16.635 20:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:16.635 "subsystems": [ 00:18:16.635 { 00:18:16.635 "subsystem": "keyring", 00:18:16.635 "config": [ 00:18:16.635 { 00:18:16.635 "method": "keyring_file_add_key", 00:18:16.635 "params": { 00:18:16.635 "name": "key0", 00:18:16.635 "path": "/tmp/tmp.1zPxjs6rVC" 00:18:16.635 } 00:18:16.635 } 00:18:16.635 ] 00:18:16.635 }, 00:18:16.635 { 00:18:16.635 "subsystem": "iobuf", 00:18:16.635 "config": [ 00:18:16.635 { 00:18:16.635 "method": "iobuf_set_options", 00:18:16.635 "params": { 00:18:16.635 "small_pool_count": 8192, 00:18:16.635 "large_pool_count": 1024, 00:18:16.635 "small_bufsize": 8192, 00:18:16.635 "large_bufsize": 135168, 00:18:16.635 "enable_numa": false 00:18:16.635 } 00:18:16.635 } 00:18:16.635 ] 00:18:16.635 }, 00:18:16.635 { 00:18:16.635 "subsystem": "sock", 00:18:16.635 "config": [ 00:18:16.635 { 00:18:16.635 "method": "sock_set_default_impl", 00:18:16.635 "params": { 00:18:16.635 "impl_name": "posix" 00:18:16.635 } 00:18:16.635 }, 00:18:16.635 { 00:18:16.635 "method": "sock_impl_set_options", 00:18:16.635 "params": { 00:18:16.635 "impl_name": "ssl", 00:18:16.635 "recv_buf_size": 4096, 00:18:16.635 "send_buf_size": 4096, 00:18:16.635 "enable_recv_pipe": true, 00:18:16.635 "enable_quickack": false, 00:18:16.635 "enable_placement_id": 0, 00:18:16.635 "enable_zerocopy_send_server": true, 00:18:16.635 "enable_zerocopy_send_client": false, 00:18:16.635 "zerocopy_threshold": 0, 00:18:16.635 "tls_version": 0, 00:18:16.635 "enable_ktls": false 00:18:16.635 } 00:18:16.635 }, 00:18:16.635 { 00:18:16.635 "method": "sock_impl_set_options", 00:18:16.635 "params": { 00:18:16.635 "impl_name": "posix", 00:18:16.636 "recv_buf_size": 2097152, 00:18:16.636 "send_buf_size": 2097152, 00:18:16.636 "enable_recv_pipe": true, 00:18:16.636 "enable_quickack": false, 00:18:16.636 "enable_placement_id": 0, 00:18:16.636 "enable_zerocopy_send_server": true, 00:18:16.636 "enable_zerocopy_send_client": false, 00:18:16.636 "zerocopy_threshold": 0, 00:18:16.636 "tls_version": 0, 00:18:16.636 "enable_ktls": false 00:18:16.636 } 00:18:16.636 } 00:18:16.636 ] 00:18:16.636 }, 00:18:16.636 { 00:18:16.636 "subsystem": "vmd", 00:18:16.636 "config": [] 00:18:16.636 }, 00:18:16.636 { 00:18:16.636 "subsystem": "accel", 00:18:16.636 "config": [ 00:18:16.636 { 00:18:16.636 "method": "accel_set_options", 00:18:16.636 "params": { 00:18:16.636 "small_cache_size": 128, 00:18:16.636 "large_cache_size": 16, 00:18:16.636 "task_count": 2048, 00:18:16.636 "sequence_count": 2048, 00:18:16.636 "buf_count": 2048 00:18:16.636 } 00:18:16.636 } 00:18:16.636 ] 00:18:16.636 }, 00:18:16.636 { 00:18:16.636 "subsystem": "bdev", 00:18:16.636 "config": [ 00:18:16.636 { 00:18:16.636 "method": "bdev_set_options", 00:18:16.636 "params": { 00:18:16.636 "bdev_io_pool_size": 65535, 00:18:16.636 "bdev_io_cache_size": 256, 00:18:16.636 "bdev_auto_examine": true, 00:18:16.636 "iobuf_small_cache_size": 128, 00:18:16.636 "iobuf_large_cache_size": 16 00:18:16.636 } 00:18:16.636 }, 00:18:16.636 { 00:18:16.636 "method": "bdev_raid_set_options", 00:18:16.636 "params": { 00:18:16.636 "process_window_size_kb": 1024, 00:18:16.636 "process_max_bandwidth_mb_sec": 0 00:18:16.636 } 00:18:16.636 }, 
00:18:16.636 { 00:18:16.636 "method": "bdev_iscsi_set_options", 00:18:16.636 "params": { 00:18:16.636 "timeout_sec": 30 00:18:16.636 } 00:18:16.636 }, 00:18:16.636 { 00:18:16.636 "method": "bdev_nvme_set_options", 00:18:16.636 "params": { 00:18:16.636 "action_on_timeout": "none", 00:18:16.636 "timeout_us": 0, 00:18:16.636 "timeout_admin_us": 0, 00:18:16.636 "keep_alive_timeout_ms": 10000, 00:18:16.636 "arbitration_burst": 0, 00:18:16.636 "low_priority_weight": 0, 00:18:16.636 "medium_priority_weight": 0, 00:18:16.636 "high_priority_weight": 0, 00:18:16.636 "nvme_adminq_poll_period_us": 10000, 00:18:16.636 "nvme_ioq_poll_period_us": 0, 00:18:16.636 "io_queue_requests": 0, 00:18:16.636 "delay_cmd_submit": true, 00:18:16.636 "transport_retry_count": 4, 00:18:16.636 "bdev_retry_count": 3, 00:18:16.636 "transport_ack_timeout": 0, 00:18:16.636 "ctrlr_loss_timeout_sec": 0, 00:18:16.636 "reconnect_delay_sec": 0, 00:18:16.636 "fast_io_fail_timeout_sec": 0, 00:18:16.636 "disable_auto_failback": false, 00:18:16.636 "generate_uuids": false, 00:18:16.636 "transport_tos": 0, 00:18:16.636 "nvme_error_stat": false, 00:18:16.636 "rdma_srq_size": 0, 00:18:16.636 "io_path_stat": false, 00:18:16.636 "allow_accel_sequence": false, 00:18:16.636 "rdma_max_cq_size": 0, 00:18:16.636 "rdma_cm_event_timeout_ms": 0, 00:18:16.636 "dhchap_digests": [ 00:18:16.636 "sha256", 00:18:16.636 "sha384", 00:18:16.636 "sha512" 00:18:16.636 ], 00:18:16.636 "dhchap_dhgroups": [ 00:18:16.636 "null", 00:18:16.636 "ffdhe2048", 00:18:16.636 "ffdhe3072", 00:18:16.636 "ffdhe4096", 00:18:16.636 "ffdhe6144", 00:18:16.636 "ffdhe8192" 00:18:16.636 ] 00:18:16.636 } 00:18:16.636 }, 00:18:16.636 { 00:18:16.636 "method": "bdev_nvme_set_hotplug", 00:18:16.636 "params": { 00:18:16.636 "period_us": 100000, 00:18:16.636 "enable": false 00:18:16.636 } 00:18:16.636 }, 00:18:16.636 { 00:18:16.636 "method": "bdev_malloc_create", 00:18:16.636 "params": { 00:18:16.636 "name": "malloc0", 00:18:16.636 "num_blocks": 8192, 00:18:16.636 "block_size": 4096, 00:18:16.636 "physical_block_size": 4096, 00:18:16.636 "uuid": "7a9fd1bd-f7de-42e1-b9ee-aba1b8991572", 00:18:16.636 "optimal_io_boundary": 0, 00:18:16.636 "md_size": 0, 00:18:16.636 "dif_type": 0, 00:18:16.636 "dif_is_head_of_md": false, 00:18:16.636 "dif_pi_format": 0 00:18:16.636 } 00:18:16.636 }, 00:18:16.636 { 00:18:16.636 "method": "bdev_wait_for_examine" 00:18:16.636 } 00:18:16.636 ] 00:18:16.636 }, 00:18:16.636 { 00:18:16.636 "subsystem": "nbd", 00:18:16.636 "config": [] 00:18:16.636 }, 00:18:16.636 { 00:18:16.636 "subsystem": "scheduler", 00:18:16.636 "config": [ 00:18:16.636 { 00:18:16.636 "method": "framework_set_scheduler", 00:18:16.636 "params": { 00:18:16.636 "name": "static" 00:18:16.636 } 00:18:16.636 } 00:18:16.636 ] 00:18:16.636 }, 00:18:16.636 { 00:18:16.636 "subsystem": "nvmf", 00:18:16.636 "config": [ 00:18:16.636 { 00:18:16.636 "method": "nvmf_set_config", 00:18:16.636 "params": { 00:18:16.636 "discovery_filter": "match_any", 00:18:16.636 "admin_cmd_passthru": { 00:18:16.636 "identify_ctrlr": false 00:18:16.636 }, 00:18:16.636 "dhchap_digests": [ 00:18:16.636 "sha256", 00:18:16.636 "sha384", 00:18:16.636 "sha512" 00:18:16.636 ], 00:18:16.636 "dhchap_dhgroups": [ 00:18:16.636 "null", 00:18:16.636 "ffdhe2048", 00:18:16.636 "ffdhe3072", 00:18:16.636 "ffdhe4096", 00:18:16.636 "ffdhe6144", 00:18:16.636 "ffdhe8192" 00:18:16.636 ] 00:18:16.636 } 00:18:16.636 }, 00:18:16.636 { 00:18:16.636 "method": "nvmf_set_max_subsystems", 00:18:16.636 "params": { 00:18:16.636 "max_subsystems": 1024 
00:18:16.636 } 00:18:16.636 }, 00:18:16.636 { 00:18:16.636 "method": "nvmf_set_crdt", 00:18:16.636 "params": { 00:18:16.636 "crdt1": 0, 00:18:16.636 "crdt2": 0, 00:18:16.636 "crdt3": 0 00:18:16.636 } 00:18:16.636 }, 00:18:16.636 { 00:18:16.636 "method": "nvmf_create_transport", 00:18:16.636 "params": { 00:18:16.636 "trtype": "TCP", 00:18:16.636 "max_queue_depth": 128, 00:18:16.636 "max_io_qpairs_per_ctrlr": 127, 00:18:16.636 "in_capsule_data_size": 4096, 00:18:16.636 "max_io_size": 131072, 00:18:16.636 "io_unit_size": 131072, 00:18:16.636 "max_aq_depth": 128, 00:18:16.636 "num_shared_buffers": 511, 00:18:16.636 "buf_cache_size": 4294967295, 00:18:16.636 "dif_insert_or_strip": false, 00:18:16.636 "zcopy": false, 00:18:16.636 "c2h_success": false, 00:18:16.636 "sock_priority": 0, 00:18:16.636 "abort_timeout_sec": 1, 00:18:16.636 "ack_timeout": 0, 00:18:16.636 "data_wr_pool_size": 0 00:18:16.636 } 00:18:16.636 }, 00:18:16.636 { 00:18:16.636 "method": "nvmf_create_subsystem", 00:18:16.636 "params": { 00:18:16.636 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:16.636 "allow_any_host": false, 00:18:16.636 "serial_number": "SPDK00000000000001", 00:18:16.636 "model_number": "SPDK bdev Controller", 00:18:16.636 "max_namespaces": 10, 00:18:16.636 "min_cntlid": 1, 00:18:16.636 "max_cntlid": 65519, 00:18:16.636 "ana_reporting": false 00:18:16.636 } 00:18:16.636 }, 00:18:16.636 { 00:18:16.636 "method": "nvmf_subsystem_add_host", 00:18:16.636 "params": { 00:18:16.636 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:16.636 "host": "nqn.2016-06.io.spdk:host1", 00:18:16.636 "psk": "key0" 00:18:16.636 } 00:18:16.636 }, 00:18:16.636 { 00:18:16.636 "method": "nvmf_subsystem_add_ns", 00:18:16.636 "params": { 00:18:16.636 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:16.636 "namespace": { 00:18:16.636 "nsid": 1, 00:18:16.636 "bdev_name": "malloc0", 00:18:16.636 "nguid": "7A9FD1BDF7DE42E1B9EEABA1B8991572", 00:18:16.636 "uuid": "7a9fd1bd-f7de-42e1-b9ee-aba1b8991572", 00:18:16.637 "no_auto_visible": false 00:18:16.637 } 00:18:16.637 } 00:18:16.637 }, 00:18:16.637 { 00:18:16.637 "method": "nvmf_subsystem_add_listener", 00:18:16.637 "params": { 00:18:16.637 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:16.637 "listen_address": { 00:18:16.637 "trtype": "TCP", 00:18:16.637 "adrfam": "IPv4", 00:18:16.637 "traddr": "10.0.0.2", 00:18:16.637 "trsvcid": "4420" 00:18:16.637 }, 00:18:16.637 "secure_channel": true 00:18:16.637 } 00:18:16.637 } 00:18:16.637 ] 00:18:16.637 } 00:18:16.637 ] 00:18:16.637 }' 00:18:16.637 20:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.637 20:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1682899 00:18:16.637 20:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:16.637 20:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1682899 00:18:16.637 20:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1682899 ']' 00:18:16.637 20:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.637 20:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:16.637 20:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:18:16.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.637 20:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:16.637 20:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.637 [2024-11-26 20:48:20.155482] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:18:16.637 [2024-11-26 20:48:20.155579] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.637 [2024-11-26 20:48:20.230608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.637 [2024-11-26 20:48:20.284529] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.637 [2024-11-26 20:48:20.284601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:16.637 [2024-11-26 20:48:20.284626] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:16.637 [2024-11-26 20:48:20.284636] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:16.637 [2024-11-26 20:48:20.284646] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:16.637 [2024-11-26 20:48:20.285286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.895 [2024-11-26 20:48:20.531851] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:16.895 [2024-11-26 20:48:20.563859] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:16.895 [2024-11-26 20:48:20.564106] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:17.461 20:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:17.461 20:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:17.461 20:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:17.461 20:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:17.461 20:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.721 20:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.721 20:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1683054 00:18:17.721 20:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1683054 /var/tmp/bdevperf.sock 00:18:17.721 20:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1683054 ']' 00:18:17.721 20:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:17.721 20:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:17.721 20:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:17.721 20:48:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:17.721 "subsystems": [ 00:18:17.721 { 00:18:17.721 "subsystem": "keyring", 00:18:17.721 "config": [ 00:18:17.721 { 00:18:17.721 "method": "keyring_file_add_key", 00:18:17.721 "params": { 00:18:17.721 "name": "key0", 00:18:17.721 "path": "/tmp/tmp.1zPxjs6rVC" 00:18:17.721 } 00:18:17.721 } 00:18:17.721 ] 00:18:17.721 }, 00:18:17.721 { 00:18:17.721 "subsystem": "iobuf", 00:18:17.721 "config": [ 00:18:17.721 { 00:18:17.721 "method": "iobuf_set_options", 00:18:17.721 "params": { 00:18:17.721 "small_pool_count": 8192, 00:18:17.721 "large_pool_count": 1024, 00:18:17.721 "small_bufsize": 8192, 00:18:17.721 "large_bufsize": 135168, 00:18:17.721 "enable_numa": false 00:18:17.721 } 00:18:17.721 } 00:18:17.721 ] 00:18:17.721 }, 00:18:17.721 { 00:18:17.721 "subsystem": "sock", 00:18:17.721 "config": [ 00:18:17.721 { 00:18:17.721 "method": "sock_set_default_impl", 00:18:17.721 "params": { 00:18:17.721 "impl_name": "posix" 00:18:17.721 } 00:18:17.721 }, 00:18:17.721 { 00:18:17.721 "method": "sock_impl_set_options", 00:18:17.721 "params": { 00:18:17.721 "impl_name": "ssl", 00:18:17.721 "recv_buf_size": 4096, 00:18:17.721 "send_buf_size": 4096, 00:18:17.721 "enable_recv_pipe": true, 00:18:17.721 "enable_quickack": false, 00:18:17.721 "enable_placement_id": 0, 00:18:17.721 "enable_zerocopy_send_server": true, 00:18:17.721 "enable_zerocopy_send_client": false, 00:18:17.721 "zerocopy_threshold": 0, 00:18:17.721 "tls_version": 0, 00:18:17.721 "enable_ktls": false 00:18:17.721 } 00:18:17.721 }, 00:18:17.721 { 00:18:17.721 "method": "sock_impl_set_options", 00:18:17.721 "params": { 00:18:17.721 "impl_name": "posix", 00:18:17.721 "recv_buf_size": 2097152, 00:18:17.721 "send_buf_size": 2097152, 00:18:17.721 "enable_recv_pipe": true, 00:18:17.721 "enable_quickack": false, 00:18:17.721 "enable_placement_id": 0, 00:18:17.721 "enable_zerocopy_send_server": true, 00:18:17.721 "enable_zerocopy_send_client": false, 00:18:17.721 "zerocopy_threshold": 0, 00:18:17.721 "tls_version": 0, 00:18:17.721 "enable_ktls": false 00:18:17.721 } 00:18:17.721 } 00:18:17.721 ] 00:18:17.721 }, 00:18:17.721 { 00:18:17.721 "subsystem": "vmd", 00:18:17.721 "config": [] 00:18:17.721 }, 00:18:17.721 { 00:18:17.721 "subsystem": "accel", 00:18:17.721 "config": [ 00:18:17.721 { 00:18:17.721 "method": "accel_set_options", 00:18:17.721 "params": { 00:18:17.721 "small_cache_size": 128, 00:18:17.721 "large_cache_size": 16, 00:18:17.721 "task_count": 2048, 00:18:17.721 "sequence_count": 2048, 00:18:17.721 "buf_count": 2048 00:18:17.721 } 00:18:17.721 } 00:18:17.721 ] 00:18:17.721 }, 00:18:17.721 { 00:18:17.721 "subsystem": "bdev", 00:18:17.721 "config": [ 00:18:17.721 { 00:18:17.721 "method": "bdev_set_options", 00:18:17.721 "params": { 00:18:17.721 "bdev_io_pool_size": 65535, 00:18:17.721 "bdev_io_cache_size": 256, 00:18:17.721 "bdev_auto_examine": true, 00:18:17.721 "iobuf_small_cache_size": 128, 00:18:17.721 "iobuf_large_cache_size": 16 00:18:17.721 } 00:18:17.721 }, 00:18:17.721 { 00:18:17.721 "method": "bdev_raid_set_options", 00:18:17.721 "params": { 00:18:17.721 "process_window_size_kb": 1024, 00:18:17.721 "process_max_bandwidth_mb_sec": 0 00:18:17.721 } 00:18:17.721 }, 00:18:17.721 { 00:18:17.721 "method": "bdev_iscsi_set_options", 00:18:17.722 "params": { 00:18:17.722 "timeout_sec": 30 00:18:17.722 } 00:18:17.722 }, 00:18:17.722 { 00:18:17.722 "method": "bdev_nvme_set_options", 00:18:17.722 "params": { 00:18:17.722 "action_on_timeout": "none", 00:18:17.722 
"timeout_us": 0, 00:18:17.722 "timeout_admin_us": 0, 00:18:17.722 "keep_alive_timeout_ms": 10000, 00:18:17.722 "arbitration_burst": 0, 00:18:17.722 "low_priority_weight": 0, 00:18:17.722 "medium_priority_weight": 0, 00:18:17.722 "high_priority_weight": 0, 00:18:17.722 "nvme_adminq_poll_period_us": 10000, 00:18:17.722 "nvme_ioq_poll_period_us": 0, 00:18:17.722 "io_queue_requests": 512, 00:18:17.722 "delay_cmd_submit": true, 00:18:17.722 "transport_retry_count": 4, 00:18:17.722 "bdev_retry_count": 3, 00:18:17.722 "transport_ack_timeout": 0, 00:18:17.722 "ctrlr_loss_timeout_sec": 0, 00:18:17.722 "reconnect_delay_sec": 0, 00:18:17.722 "fast_io_fail_timeout_sec": 0, 00:18:17.722 "disable_auto_failback": false, 00:18:17.722 "generate_uuids": false, 00:18:17.722 "transport_tos": 0, 00:18:17.722 "nvme_error_stat": false, 00:18:17.722 "rdma_srq_size": 0, 00:18:17.722 "io_path_stat": false, 00:18:17.722 "allow_accel_sequence": false, 00:18:17.722 "rdma_max_cq_size": 0, 00:18:17.722 "rdma_cm_event_timeout_ms": 0, 00:18:17.722 "dhchap_digests": [ 00:18:17.722 "sha256", 00:18:17.722 "sha384", 00:18:17.722 "sha512" 00:18:17.722 ], 00:18:17.722 "dhchap_dhgroups": [ 00:18:17.722 "null", 00:18:17.722 "ffdhe2048", 00:18:17.722 "ffdhe3072", 00:18:17.722 "ffdhe4096", 00:18:17.722 "ffdhe6144", 00:18:17.722 "ffdhe8192" 00:18:17.722 ] 00:18:17.722 } 00:18:17.722 }, 00:18:17.722 { 00:18:17.722 "method": "bdev_nvme_attach_controller", 00:18:17.722 "params": { 00:18:17.722 "name": "TLSTEST", 00:18:17.722 "trtype": "TCP", 00:18:17.722 "adrfam": "IPv4", 00:18:17.722 "traddr": "10.0.0.2", 00:18:17.722 "trsvcid": "4420", 00:18:17.722 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.722 "prchk_reftag": false, 00:18:17.722 "prchk_guard": false, 00:18:17.722 "ctrlr_loss_timeout_sec": 0, 00:18:17.722 "reconnect_delay_sec": 0, 00:18:17.722 "fast_io_fail_timeout_sec": 0, 00:18:17.722 "psk": "key0", 00:18:17.722 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:17.722 "hdgst": false, 00:18:17.722 "ddgst": false, 00:18:17.722 "multipath": "multipath" 00:18:17.722 } 00:18:17.722 }, 00:18:17.722 { 00:18:17.722 "method": "bdev_nvme_set_hotplug", 00:18:17.722 "params": { 00:18:17.722 "period_us": 100000, 00:18:17.722 "enable": false 00:18:17.722 } 00:18:17.722 }, 00:18:17.722 { 00:18:17.722 "method": "bdev_wait_for_examine" 00:18:17.722 } 00:18:17.722 ] 00:18:17.722 }, 00:18:17.722 { 00:18:17.722 "subsystem": "nbd", 00:18:17.722 "config": [] 00:18:17.722 } 00:18:17.722 ] 00:18:17.722 }' 00:18:17.722 20:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:17.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:17.722 20:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:17.722 20:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.722 [2024-11-26 20:48:21.216075] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:18:17.722 [2024-11-26 20:48:21.216167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1683054 ] 00:18:17.722 [2024-11-26 20:48:21.281976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.722 [2024-11-26 20:48:21.339049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.979 [2024-11-26 20:48:21.524574] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:17.979 20:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:17.979 20:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:17.979 20:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:18.236 Running I/O for 10 seconds... 00:18:20.100 3512.00 IOPS, 13.72 MiB/s [2024-11-26T19:48:25.170Z] 3543.00 IOPS, 13.84 MiB/s [2024-11-26T19:48:26.102Z] 3581.33 IOPS, 13.99 MiB/s [2024-11-26T19:48:27.034Z] 3552.00 IOPS, 13.88 MiB/s [2024-11-26T19:48:27.966Z] 3560.80 IOPS, 13.91 MiB/s [2024-11-26T19:48:28.899Z] 3562.83 IOPS, 13.92 MiB/s [2024-11-26T19:48:29.924Z] 3560.00 IOPS, 13.91 MiB/s [2024-11-26T19:48:30.857Z] 3561.62 IOPS, 13.91 MiB/s [2024-11-26T19:48:31.789Z] 3567.67 IOPS, 13.94 MiB/s [2024-11-26T19:48:32.045Z] 3567.40 IOPS, 13.94 MiB/s 00:18:28.348 Latency(us) 00:18:28.348 [2024-11-26T19:48:32.045Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.348 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:28.348 Verification LBA range: start 0x0 length 0x2000 00:18:28.348 TLSTESTn1 : 10.03 3568.70 13.94 0.00 0.00 35786.90 7281.78 37671.06 00:18:28.348 [2024-11-26T19:48:32.045Z] =================================================================================================================== 00:18:28.348 [2024-11-26T19:48:32.045Z] Total : 3568.70 13.94 0.00 0.00 35786.90 7281.78 37671.06 00:18:28.348 { 00:18:28.348 "results": [ 00:18:28.348 { 00:18:28.348 "job": "TLSTESTn1", 00:18:28.348 "core_mask": "0x4", 00:18:28.348 "workload": "verify", 00:18:28.348 "status": "finished", 00:18:28.348 "verify_range": { 00:18:28.348 "start": 0, 00:18:28.348 "length": 8192 00:18:28.348 }, 00:18:28.348 "queue_depth": 128, 00:18:28.349 "io_size": 4096, 00:18:28.349 "runtime": 10.031933, 00:18:28.349 "iops": 3568.7040573337163, 00:18:28.349 "mibps": 13.94025022395983, 00:18:28.349 "io_failed": 0, 00:18:28.349 "io_timeout": 0, 00:18:28.349 "avg_latency_us": 35786.895313621484, 00:18:28.349 "min_latency_us": 7281.777777777777, 00:18:28.349 "max_latency_us": 37671.0637037037 00:18:28.349 } 00:18:28.349 ], 00:18:28.349 "core_count": 1 00:18:28.349 } 00:18:28.349 20:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:28.349 20:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1683054 00:18:28.349 20:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1683054 ']' 00:18:28.349 20:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1683054 00:18:28.349 20:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:18:28.349 20:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:28.349 20:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1683054 00:18:28.349 20:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:28.349 20:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:28.349 20:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1683054' 00:18:28.349 killing process with pid 1683054 00:18:28.349 20:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1683054 00:18:28.349 Received shutdown signal, test time was about 10.000000 seconds 00:18:28.349 00:18:28.349 Latency(us) 00:18:28.349 [2024-11-26T19:48:32.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.349 [2024-11-26T19:48:32.046Z] =================================================================================================================== 00:18:28.349 [2024-11-26T19:48:32.046Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:28.349 20:48:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1683054 00:18:28.606 20:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1682899 00:18:28.606 20:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1682899 ']' 00:18:28.606 20:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1682899 00:18:28.606 20:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:28.606 20:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:28.606 20:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1682899 00:18:28.606 20:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:28.606 20:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:28.606 20:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1682899' 00:18:28.606 killing process with pid 1682899 00:18:28.606 20:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1682899 00:18:28.606 20:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1682899 00:18:28.864 20:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:18:28.864 20:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:28.864 20:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:28.864 20:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.864 20:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1684386 00:18:28.864 20:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:28.864 20:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1684386 
00:18:28.864 20:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1684386 ']' 00:18:28.864 20:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.864 20:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.864 20:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.864 20:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.864 20:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.864 [2024-11-26 20:48:32.424076] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:18:28.864 [2024-11-26 20:48:32.424169] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.864 [2024-11-26 20:48:32.493962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.864 [2024-11-26 20:48:32.549537] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:28.864 [2024-11-26 20:48:32.549606] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:28.864 [2024-11-26 20:48:32.549629] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:28.864 [2024-11-26 20:48:32.549640] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:28.864 [2024-11-26 20:48:32.549649] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
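[editor's note, not captured console output] The setup_nvmf_tgt step that runs next repeats the same target-side RPC sequence used earlier in this log to bring up a TLS-capable listener. A condensed sketch of that sequence, using the values visible in the log (rpc.py again abbreviates the full scripts/rpc.py path):
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k marks the listener as TLS; it appears as "secure_channel": true in the saved config
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.1zPxjs6rVC
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
The initiator side mirrors this: bdevperf registers the same key on its own RPC socket and attaches with bdev_nvme_attach_controller ... --psk key0, as shown further below.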
00:18:28.864 [2024-11-26 20:48:32.550194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.121 20:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:29.121 20:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:29.121 20:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:29.121 20:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:29.121 20:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.121 20:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:29.121 20:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.1zPxjs6rVC 00:18:29.121 20:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.1zPxjs6rVC 00:18:29.121 20:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:29.378 [2024-11-26 20:48:32.927237] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:29.378 20:48:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:29.635 20:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:29.893 [2024-11-26 20:48:33.460711] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:29.893 [2024-11-26 20:48:33.460941] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:29.893 20:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:30.149 malloc0 00:18:30.149 20:48:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:30.407 20:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.1zPxjs6rVC 00:18:30.664 20:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:30.921 20:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1684670 00:18:30.921 20:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:30.921 20:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:30.921 20:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1684670 /var/tmp/bdevperf.sock 00:18:30.921 20:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 1684670 ']' 00:18:30.921 20:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:30.921 20:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:30.921 20:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:30.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:30.921 20:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:30.921 20:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.179 [2024-11-26 20:48:34.625996] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:18:31.179 [2024-11-26 20:48:34.626091] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1684670 ] 00:18:31.179 [2024-11-26 20:48:34.692741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.179 [2024-11-26 20:48:34.749556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.179 20:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.179 20:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:31.179 20:48:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1zPxjs6rVC 00:18:31.437 20:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:32.003 [2024-11-26 20:48:35.425909] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:32.003 nvme0n1 00:18:32.003 20:48:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:32.003 Running I/O for 1 seconds... 
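The first bdevperf pass above boils down to a short RPC sequence: the target gets a TCP transport, a subsystem with a malloc namespace, a TLS-enabled listener (-k), and a PSK-guarded host entry, while the initiator-side bdevperf loads the same PSK into its keyring and attaches with --psk. The following is a condensed sketch of that sequence using only the commands visible in this log, not the literal target/tls.sh code; paths are shortened to the SPDK tree, and the key file /tmp/tmp.1zPxjs6rVC and address 10.0.0.2 are specific to this run.

  # target side (default RPC socket /var/tmp/spdk.sock)
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.1zPxjs6rVC
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
  # initiator side, driven through the long-lived bdevperf RPC socket
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1zPxjs6rVC
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests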
00:18:33.375 3165.00 IOPS, 12.36 MiB/s 00:18:33.375 Latency(us) 00:18:33.375 [2024-11-26T19:48:37.072Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.375 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:33.375 Verification LBA range: start 0x0 length 0x2000 00:18:33.375 nvme0n1 : 1.02 3237.47 12.65 0.00 0.00 39237.85 6602.15 70681.79 00:18:33.375 [2024-11-26T19:48:37.072Z] =================================================================================================================== 00:18:33.375 [2024-11-26T19:48:37.072Z] Total : 3237.47 12.65 0.00 0.00 39237.85 6602.15 70681.79 00:18:33.375 { 00:18:33.375 "results": [ 00:18:33.375 { 00:18:33.375 "job": "nvme0n1", 00:18:33.375 "core_mask": "0x2", 00:18:33.375 "workload": "verify", 00:18:33.375 "status": "finished", 00:18:33.375 "verify_range": { 00:18:33.375 "start": 0, 00:18:33.375 "length": 8192 00:18:33.375 }, 00:18:33.375 "queue_depth": 128, 00:18:33.375 "io_size": 4096, 00:18:33.375 "runtime": 1.017153, 00:18:33.375 "iops": 3237.467716262942, 00:18:33.375 "mibps": 12.646358266652117, 00:18:33.375 "io_failed": 0, 00:18:33.375 "io_timeout": 0, 00:18:33.375 "avg_latency_us": 39237.85107894411, 00:18:33.375 "min_latency_us": 6602.145185185185, 00:18:33.375 "max_latency_us": 70681.78962962962 00:18:33.375 } 00:18:33.375 ], 00:18:33.375 "core_count": 1 00:18:33.375 } 00:18:33.375 20:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1684670 00:18:33.375 20:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1684670 ']' 00:18:33.375 20:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1684670 00:18:33.375 20:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:33.375 20:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.375 20:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1684670 00:18:33.375 20:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:33.375 20:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:33.375 20:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1684670' 00:18:33.375 killing process with pid 1684670 00:18:33.375 20:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1684670 00:18:33.375 Received shutdown signal, test time was about 1.000000 seconds 00:18:33.375 00:18:33.375 Latency(us) 00:18:33.375 [2024-11-26T19:48:37.072Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.375 [2024-11-26T19:48:37.072Z] =================================================================================================================== 00:18:33.375 [2024-11-26T19:48:37.072Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:33.376 20:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1684670 00:18:33.376 20:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1684386 00:18:33.376 20:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1684386 ']' 00:18:33.376 20:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1684386 00:18:33.376 20:48:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:33.376 20:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.376 20:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1684386 00:18:33.376 20:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:33.376 20:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:33.376 20:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1684386' 00:18:33.376 killing process with pid 1684386 00:18:33.376 20:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1684386 00:18:33.376 20:48:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1684386 00:18:33.635 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:18:33.635 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:33.635 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:33.635 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.635 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1684946 00:18:33.635 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:33.635 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1684946 00:18:33.635 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1684946 ']' 00:18:33.635 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.635 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:33.635 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.635 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:33.635 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.635 [2024-11-26 20:48:37.291894] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:18:33.635 [2024-11-26 20:48:37.292010] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:33.893 [2024-11-26 20:48:37.363365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.893 [2024-11-26 20:48:37.415609] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:33.893 [2024-11-26 20:48:37.415658] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:33.893 [2024-11-26 20:48:37.415680] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:33.893 [2024-11-26 20:48:37.415692] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:33.893 [2024-11-26 20:48:37.415702] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:33.893 [2024-11-26 20:48:37.416265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.893 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:33.893 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:33.893 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:33.893 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:33.893 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.893 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:33.893 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:18:33.893 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.893 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.893 [2024-11-26 20:48:37.563157] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:33.893 malloc0 00:18:34.152 [2024-11-26 20:48:37.593754] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:34.152 [2024-11-26 20:48:37.593980] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:34.152 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.152 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1685090 00:18:34.152 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:34.152 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1685090 /var/tmp/bdevperf.sock 00:18:34.152 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1685090 ']' 00:18:34.152 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:34.152 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:34.152 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:34.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:34.152 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:34.152 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.152 [2024-11-26 20:48:37.664406] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:18:34.152 [2024-11-26 20:48:37.664468] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1685090 ] 00:18:34.152 [2024-11-26 20:48:37.728525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.152 [2024-11-26 20:48:37.786840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.410 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:34.410 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:34.410 20:48:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1zPxjs6rVC 00:18:34.669 20:48:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:34.927 [2024-11-26 20:48:38.400891] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:34.927 nvme0n1 00:18:34.927 20:48:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:34.927 Running I/O for 1 seconds... 00:18:36.301 3466.00 IOPS, 13.54 MiB/s 00:18:36.301 Latency(us) 00:18:36.301 [2024-11-26T19:48:39.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.301 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:36.301 Verification LBA range: start 0x0 length 0x2000 00:18:36.301 nvme0n1 : 1.02 3523.67 13.76 0.00 0.00 35979.86 7573.05 31263.10 00:18:36.301 [2024-11-26T19:48:39.998Z] =================================================================================================================== 00:18:36.301 [2024-11-26T19:48:39.998Z] Total : 3523.67 13.76 0.00 0.00 35979.86 7573.05 31263.10 00:18:36.301 { 00:18:36.301 "results": [ 00:18:36.301 { 00:18:36.301 "job": "nvme0n1", 00:18:36.301 "core_mask": "0x2", 00:18:36.301 "workload": "verify", 00:18:36.301 "status": "finished", 00:18:36.301 "verify_range": { 00:18:36.301 "start": 0, 00:18:36.301 "length": 8192 00:18:36.301 }, 00:18:36.301 "queue_depth": 128, 00:18:36.301 "io_size": 4096, 00:18:36.301 "runtime": 1.019958, 00:18:36.301 "iops": 3523.6745042442926, 00:18:36.301 "mibps": 13.764353532204268, 00:18:36.301 "io_failed": 0, 00:18:36.301 "io_timeout": 0, 00:18:36.301 "avg_latency_us": 35979.856969846864, 00:18:36.301 "min_latency_us": 7573.0488888888885, 00:18:36.301 "max_latency_us": 31263.09925925926 00:18:36.301 } 00:18:36.301 ], 00:18:36.301 "core_count": 1 00:18:36.301 } 00:18:36.301 20:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:18:36.301 20:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.301 20:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.301 20:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.301 20:48:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:18:36.301 "subsystems": [ 00:18:36.301 { 00:18:36.301 "subsystem": "keyring", 00:18:36.301 "config": [ 00:18:36.301 { 00:18:36.301 "method": "keyring_file_add_key", 00:18:36.301 "params": { 00:18:36.301 "name": "key0", 00:18:36.301 "path": "/tmp/tmp.1zPxjs6rVC" 00:18:36.301 } 00:18:36.301 } 00:18:36.301 ] 00:18:36.301 }, 00:18:36.301 { 00:18:36.301 "subsystem": "iobuf", 00:18:36.301 "config": [ 00:18:36.301 { 00:18:36.301 "method": "iobuf_set_options", 00:18:36.301 "params": { 00:18:36.301 "small_pool_count": 8192, 00:18:36.301 "large_pool_count": 1024, 00:18:36.301 "small_bufsize": 8192, 00:18:36.301 "large_bufsize": 135168, 00:18:36.301 "enable_numa": false 00:18:36.301 } 00:18:36.301 } 00:18:36.301 ] 00:18:36.301 }, 00:18:36.301 { 00:18:36.301 "subsystem": "sock", 00:18:36.301 "config": [ 00:18:36.301 { 00:18:36.301 "method": "sock_set_default_impl", 00:18:36.301 "params": { 00:18:36.301 "impl_name": "posix" 00:18:36.301 } 00:18:36.301 }, 00:18:36.301 { 00:18:36.301 "method": "sock_impl_set_options", 00:18:36.301 "params": { 00:18:36.301 "impl_name": "ssl", 00:18:36.301 "recv_buf_size": 4096, 00:18:36.301 "send_buf_size": 4096, 00:18:36.301 "enable_recv_pipe": true, 00:18:36.301 "enable_quickack": false, 00:18:36.301 "enable_placement_id": 0, 00:18:36.301 "enable_zerocopy_send_server": true, 00:18:36.301 "enable_zerocopy_send_client": false, 00:18:36.301 "zerocopy_threshold": 0, 00:18:36.301 "tls_version": 0, 00:18:36.301 "enable_ktls": false 00:18:36.301 } 00:18:36.301 }, 00:18:36.301 { 00:18:36.301 "method": "sock_impl_set_options", 00:18:36.301 "params": { 00:18:36.301 "impl_name": "posix", 00:18:36.301 "recv_buf_size": 2097152, 00:18:36.301 "send_buf_size": 2097152, 00:18:36.301 "enable_recv_pipe": true, 00:18:36.301 "enable_quickack": false, 00:18:36.301 "enable_placement_id": 0, 00:18:36.301 "enable_zerocopy_send_server": true, 00:18:36.301 "enable_zerocopy_send_client": false, 00:18:36.301 "zerocopy_threshold": 0, 00:18:36.301 "tls_version": 0, 00:18:36.301 "enable_ktls": false 00:18:36.301 } 00:18:36.301 } 00:18:36.301 ] 00:18:36.301 }, 00:18:36.301 { 00:18:36.301 "subsystem": "vmd", 00:18:36.301 "config": [] 00:18:36.301 }, 00:18:36.301 { 00:18:36.301 "subsystem": "accel", 00:18:36.301 "config": [ 00:18:36.301 { 00:18:36.301 "method": "accel_set_options", 00:18:36.301 "params": { 00:18:36.301 "small_cache_size": 128, 00:18:36.301 "large_cache_size": 16, 00:18:36.301 "task_count": 2048, 00:18:36.301 "sequence_count": 2048, 00:18:36.301 "buf_count": 2048 00:18:36.301 } 00:18:36.301 } 00:18:36.301 ] 00:18:36.301 }, 00:18:36.302 { 00:18:36.302 "subsystem": "bdev", 00:18:36.302 "config": [ 00:18:36.302 { 00:18:36.302 "method": "bdev_set_options", 00:18:36.302 "params": { 00:18:36.302 "bdev_io_pool_size": 65535, 00:18:36.302 "bdev_io_cache_size": 256, 00:18:36.302 "bdev_auto_examine": true, 00:18:36.302 "iobuf_small_cache_size": 128, 00:18:36.302 "iobuf_large_cache_size": 16 00:18:36.302 } 00:18:36.302 }, 00:18:36.302 { 00:18:36.302 "method": "bdev_raid_set_options", 00:18:36.302 "params": { 00:18:36.302 "process_window_size_kb": 1024, 00:18:36.302 "process_max_bandwidth_mb_sec": 0 00:18:36.302 } 00:18:36.302 }, 00:18:36.302 { 00:18:36.302 "method": "bdev_iscsi_set_options", 00:18:36.302 "params": { 00:18:36.302 "timeout_sec": 30 00:18:36.302 } 00:18:36.302 }, 00:18:36.302 { 00:18:36.302 "method": "bdev_nvme_set_options", 00:18:36.302 "params": { 00:18:36.302 "action_on_timeout": "none", 00:18:36.302 
"timeout_us": 0, 00:18:36.302 "timeout_admin_us": 0, 00:18:36.302 "keep_alive_timeout_ms": 10000, 00:18:36.302 "arbitration_burst": 0, 00:18:36.302 "low_priority_weight": 0, 00:18:36.302 "medium_priority_weight": 0, 00:18:36.302 "high_priority_weight": 0, 00:18:36.302 "nvme_adminq_poll_period_us": 10000, 00:18:36.302 "nvme_ioq_poll_period_us": 0, 00:18:36.302 "io_queue_requests": 0, 00:18:36.302 "delay_cmd_submit": true, 00:18:36.302 "transport_retry_count": 4, 00:18:36.302 "bdev_retry_count": 3, 00:18:36.302 "transport_ack_timeout": 0, 00:18:36.302 "ctrlr_loss_timeout_sec": 0, 00:18:36.302 "reconnect_delay_sec": 0, 00:18:36.302 "fast_io_fail_timeout_sec": 0, 00:18:36.302 "disable_auto_failback": false, 00:18:36.302 "generate_uuids": false, 00:18:36.302 "transport_tos": 0, 00:18:36.302 "nvme_error_stat": false, 00:18:36.302 "rdma_srq_size": 0, 00:18:36.302 "io_path_stat": false, 00:18:36.302 "allow_accel_sequence": false, 00:18:36.302 "rdma_max_cq_size": 0, 00:18:36.302 "rdma_cm_event_timeout_ms": 0, 00:18:36.302 "dhchap_digests": [ 00:18:36.302 "sha256", 00:18:36.302 "sha384", 00:18:36.302 "sha512" 00:18:36.302 ], 00:18:36.302 "dhchap_dhgroups": [ 00:18:36.302 "null", 00:18:36.302 "ffdhe2048", 00:18:36.302 "ffdhe3072", 00:18:36.302 "ffdhe4096", 00:18:36.302 "ffdhe6144", 00:18:36.302 "ffdhe8192" 00:18:36.302 ] 00:18:36.302 } 00:18:36.302 }, 00:18:36.302 { 00:18:36.302 "method": "bdev_nvme_set_hotplug", 00:18:36.302 "params": { 00:18:36.302 "period_us": 100000, 00:18:36.302 "enable": false 00:18:36.302 } 00:18:36.302 }, 00:18:36.302 { 00:18:36.302 "method": "bdev_malloc_create", 00:18:36.302 "params": { 00:18:36.302 "name": "malloc0", 00:18:36.302 "num_blocks": 8192, 00:18:36.302 "block_size": 4096, 00:18:36.302 "physical_block_size": 4096, 00:18:36.302 "uuid": "c8d1db27-116a-42b8-a18e-be97a0db97da", 00:18:36.302 "optimal_io_boundary": 0, 00:18:36.302 "md_size": 0, 00:18:36.302 "dif_type": 0, 00:18:36.302 "dif_is_head_of_md": false, 00:18:36.302 "dif_pi_format": 0 00:18:36.302 } 00:18:36.302 }, 00:18:36.302 { 00:18:36.302 "method": "bdev_wait_for_examine" 00:18:36.302 } 00:18:36.302 ] 00:18:36.302 }, 00:18:36.302 { 00:18:36.302 "subsystem": "nbd", 00:18:36.302 "config": [] 00:18:36.302 }, 00:18:36.302 { 00:18:36.302 "subsystem": "scheduler", 00:18:36.302 "config": [ 00:18:36.302 { 00:18:36.302 "method": "framework_set_scheduler", 00:18:36.302 "params": { 00:18:36.302 "name": "static" 00:18:36.302 } 00:18:36.302 } 00:18:36.302 ] 00:18:36.302 }, 00:18:36.302 { 00:18:36.302 "subsystem": "nvmf", 00:18:36.302 "config": [ 00:18:36.302 { 00:18:36.302 "method": "nvmf_set_config", 00:18:36.302 "params": { 00:18:36.302 "discovery_filter": "match_any", 00:18:36.302 "admin_cmd_passthru": { 00:18:36.302 "identify_ctrlr": false 00:18:36.302 }, 00:18:36.302 "dhchap_digests": [ 00:18:36.302 "sha256", 00:18:36.302 "sha384", 00:18:36.302 "sha512" 00:18:36.302 ], 00:18:36.302 "dhchap_dhgroups": [ 00:18:36.302 "null", 00:18:36.302 "ffdhe2048", 00:18:36.302 "ffdhe3072", 00:18:36.302 "ffdhe4096", 00:18:36.302 "ffdhe6144", 00:18:36.302 "ffdhe8192" 00:18:36.302 ] 00:18:36.302 } 00:18:36.302 }, 00:18:36.302 { 00:18:36.302 "method": "nvmf_set_max_subsystems", 00:18:36.302 "params": { 00:18:36.302 "max_subsystems": 1024 00:18:36.302 } 00:18:36.302 }, 00:18:36.302 { 00:18:36.302 "method": "nvmf_set_crdt", 00:18:36.302 "params": { 00:18:36.302 "crdt1": 0, 00:18:36.302 "crdt2": 0, 00:18:36.302 "crdt3": 0 00:18:36.302 } 00:18:36.302 }, 00:18:36.302 { 00:18:36.302 "method": "nvmf_create_transport", 00:18:36.302 "params": 
{ 00:18:36.302 "trtype": "TCP", 00:18:36.302 "max_queue_depth": 128, 00:18:36.302 "max_io_qpairs_per_ctrlr": 127, 00:18:36.302 "in_capsule_data_size": 4096, 00:18:36.302 "max_io_size": 131072, 00:18:36.302 "io_unit_size": 131072, 00:18:36.302 "max_aq_depth": 128, 00:18:36.302 "num_shared_buffers": 511, 00:18:36.302 "buf_cache_size": 4294967295, 00:18:36.302 "dif_insert_or_strip": false, 00:18:36.302 "zcopy": false, 00:18:36.302 "c2h_success": false, 00:18:36.302 "sock_priority": 0, 00:18:36.302 "abort_timeout_sec": 1, 00:18:36.302 "ack_timeout": 0, 00:18:36.302 "data_wr_pool_size": 0 00:18:36.302 } 00:18:36.302 }, 00:18:36.302 { 00:18:36.302 "method": "nvmf_create_subsystem", 00:18:36.302 "params": { 00:18:36.302 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.302 "allow_any_host": false, 00:18:36.302 "serial_number": "00000000000000000000", 00:18:36.302 "model_number": "SPDK bdev Controller", 00:18:36.302 "max_namespaces": 32, 00:18:36.302 "min_cntlid": 1, 00:18:36.302 "max_cntlid": 65519, 00:18:36.302 "ana_reporting": false 00:18:36.302 } 00:18:36.302 }, 00:18:36.302 { 00:18:36.302 "method": "nvmf_subsystem_add_host", 00:18:36.302 "params": { 00:18:36.302 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.302 "host": "nqn.2016-06.io.spdk:host1", 00:18:36.302 "psk": "key0" 00:18:36.302 } 00:18:36.302 }, 00:18:36.302 { 00:18:36.302 "method": "nvmf_subsystem_add_ns", 00:18:36.302 "params": { 00:18:36.302 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.302 "namespace": { 00:18:36.302 "nsid": 1, 00:18:36.302 "bdev_name": "malloc0", 00:18:36.302 "nguid": "C8D1DB27116A42B8A18EBE97A0DB97DA", 00:18:36.302 "uuid": "c8d1db27-116a-42b8-a18e-be97a0db97da", 00:18:36.302 "no_auto_visible": false 00:18:36.302 } 00:18:36.302 } 00:18:36.302 }, 00:18:36.302 { 00:18:36.302 "method": "nvmf_subsystem_add_listener", 00:18:36.302 "params": { 00:18:36.302 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.302 "listen_address": { 00:18:36.302 "trtype": "TCP", 00:18:36.302 "adrfam": "IPv4", 00:18:36.302 "traddr": "10.0.0.2", 00:18:36.302 "trsvcid": "4420" 00:18:36.302 }, 00:18:36.302 "secure_channel": false, 00:18:36.302 "sock_impl": "ssl" 00:18:36.302 } 00:18:36.302 } 00:18:36.302 ] 00:18:36.302 } 00:18:36.302 ] 00:18:36.302 }' 00:18:36.302 20:48:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:36.561 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:18:36.561 "subsystems": [ 00:18:36.561 { 00:18:36.561 "subsystem": "keyring", 00:18:36.561 "config": [ 00:18:36.561 { 00:18:36.561 "method": "keyring_file_add_key", 00:18:36.561 "params": { 00:18:36.561 "name": "key0", 00:18:36.561 "path": "/tmp/tmp.1zPxjs6rVC" 00:18:36.561 } 00:18:36.561 } 00:18:36.561 ] 00:18:36.561 }, 00:18:36.561 { 00:18:36.561 "subsystem": "iobuf", 00:18:36.561 "config": [ 00:18:36.561 { 00:18:36.561 "method": "iobuf_set_options", 00:18:36.561 "params": { 00:18:36.561 "small_pool_count": 8192, 00:18:36.561 "large_pool_count": 1024, 00:18:36.561 "small_bufsize": 8192, 00:18:36.561 "large_bufsize": 135168, 00:18:36.561 "enable_numa": false 00:18:36.561 } 00:18:36.561 } 00:18:36.561 ] 00:18:36.561 }, 00:18:36.561 { 00:18:36.561 "subsystem": "sock", 00:18:36.561 "config": [ 00:18:36.561 { 00:18:36.561 "method": "sock_set_default_impl", 00:18:36.561 "params": { 00:18:36.561 "impl_name": "posix" 00:18:36.561 } 00:18:36.561 }, 00:18:36.561 { 00:18:36.561 "method": "sock_impl_set_options", 00:18:36.561 
"params": { 00:18:36.561 "impl_name": "ssl", 00:18:36.561 "recv_buf_size": 4096, 00:18:36.561 "send_buf_size": 4096, 00:18:36.561 "enable_recv_pipe": true, 00:18:36.561 "enable_quickack": false, 00:18:36.561 "enable_placement_id": 0, 00:18:36.561 "enable_zerocopy_send_server": true, 00:18:36.561 "enable_zerocopy_send_client": false, 00:18:36.561 "zerocopy_threshold": 0, 00:18:36.561 "tls_version": 0, 00:18:36.561 "enable_ktls": false 00:18:36.561 } 00:18:36.561 }, 00:18:36.561 { 00:18:36.561 "method": "sock_impl_set_options", 00:18:36.561 "params": { 00:18:36.561 "impl_name": "posix", 00:18:36.561 "recv_buf_size": 2097152, 00:18:36.561 "send_buf_size": 2097152, 00:18:36.561 "enable_recv_pipe": true, 00:18:36.561 "enable_quickack": false, 00:18:36.561 "enable_placement_id": 0, 00:18:36.561 "enable_zerocopy_send_server": true, 00:18:36.561 "enable_zerocopy_send_client": false, 00:18:36.561 "zerocopy_threshold": 0, 00:18:36.561 "tls_version": 0, 00:18:36.561 "enable_ktls": false 00:18:36.561 } 00:18:36.561 } 00:18:36.561 ] 00:18:36.561 }, 00:18:36.562 { 00:18:36.562 "subsystem": "vmd", 00:18:36.562 "config": [] 00:18:36.562 }, 00:18:36.562 { 00:18:36.562 "subsystem": "accel", 00:18:36.562 "config": [ 00:18:36.562 { 00:18:36.562 "method": "accel_set_options", 00:18:36.562 "params": { 00:18:36.562 "small_cache_size": 128, 00:18:36.562 "large_cache_size": 16, 00:18:36.562 "task_count": 2048, 00:18:36.562 "sequence_count": 2048, 00:18:36.562 "buf_count": 2048 00:18:36.562 } 00:18:36.562 } 00:18:36.562 ] 00:18:36.562 }, 00:18:36.562 { 00:18:36.562 "subsystem": "bdev", 00:18:36.562 "config": [ 00:18:36.562 { 00:18:36.562 "method": "bdev_set_options", 00:18:36.562 "params": { 00:18:36.562 "bdev_io_pool_size": 65535, 00:18:36.562 "bdev_io_cache_size": 256, 00:18:36.562 "bdev_auto_examine": true, 00:18:36.562 "iobuf_small_cache_size": 128, 00:18:36.562 "iobuf_large_cache_size": 16 00:18:36.562 } 00:18:36.562 }, 00:18:36.562 { 00:18:36.562 "method": "bdev_raid_set_options", 00:18:36.562 "params": { 00:18:36.562 "process_window_size_kb": 1024, 00:18:36.562 "process_max_bandwidth_mb_sec": 0 00:18:36.562 } 00:18:36.562 }, 00:18:36.562 { 00:18:36.562 "method": "bdev_iscsi_set_options", 00:18:36.562 "params": { 00:18:36.562 "timeout_sec": 30 00:18:36.562 } 00:18:36.562 }, 00:18:36.562 { 00:18:36.562 "method": "bdev_nvme_set_options", 00:18:36.562 "params": { 00:18:36.562 "action_on_timeout": "none", 00:18:36.562 "timeout_us": 0, 00:18:36.562 "timeout_admin_us": 0, 00:18:36.562 "keep_alive_timeout_ms": 10000, 00:18:36.562 "arbitration_burst": 0, 00:18:36.562 "low_priority_weight": 0, 00:18:36.562 "medium_priority_weight": 0, 00:18:36.562 "high_priority_weight": 0, 00:18:36.562 "nvme_adminq_poll_period_us": 10000, 00:18:36.562 "nvme_ioq_poll_period_us": 0, 00:18:36.562 "io_queue_requests": 512, 00:18:36.562 "delay_cmd_submit": true, 00:18:36.562 "transport_retry_count": 4, 00:18:36.562 "bdev_retry_count": 3, 00:18:36.562 "transport_ack_timeout": 0, 00:18:36.562 "ctrlr_loss_timeout_sec": 0, 00:18:36.562 "reconnect_delay_sec": 0, 00:18:36.562 "fast_io_fail_timeout_sec": 0, 00:18:36.562 "disable_auto_failback": false, 00:18:36.562 "generate_uuids": false, 00:18:36.562 "transport_tos": 0, 00:18:36.562 "nvme_error_stat": false, 00:18:36.562 "rdma_srq_size": 0, 00:18:36.562 "io_path_stat": false, 00:18:36.562 "allow_accel_sequence": false, 00:18:36.562 "rdma_max_cq_size": 0, 00:18:36.562 "rdma_cm_event_timeout_ms": 0, 00:18:36.562 "dhchap_digests": [ 00:18:36.562 "sha256", 00:18:36.562 "sha384", 00:18:36.562 
"sha512" 00:18:36.562 ], 00:18:36.562 "dhchap_dhgroups": [ 00:18:36.562 "null", 00:18:36.562 "ffdhe2048", 00:18:36.562 "ffdhe3072", 00:18:36.562 "ffdhe4096", 00:18:36.562 "ffdhe6144", 00:18:36.562 "ffdhe8192" 00:18:36.562 ] 00:18:36.562 } 00:18:36.562 }, 00:18:36.562 { 00:18:36.562 "method": "bdev_nvme_attach_controller", 00:18:36.562 "params": { 00:18:36.562 "name": "nvme0", 00:18:36.562 "trtype": "TCP", 00:18:36.562 "adrfam": "IPv4", 00:18:36.562 "traddr": "10.0.0.2", 00:18:36.562 "trsvcid": "4420", 00:18:36.562 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.562 "prchk_reftag": false, 00:18:36.562 "prchk_guard": false, 00:18:36.562 "ctrlr_loss_timeout_sec": 0, 00:18:36.562 "reconnect_delay_sec": 0, 00:18:36.562 "fast_io_fail_timeout_sec": 0, 00:18:36.562 "psk": "key0", 00:18:36.562 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:36.562 "hdgst": false, 00:18:36.562 "ddgst": false, 00:18:36.562 "multipath": "multipath" 00:18:36.562 } 00:18:36.562 }, 00:18:36.562 { 00:18:36.562 "method": "bdev_nvme_set_hotplug", 00:18:36.562 "params": { 00:18:36.562 "period_us": 100000, 00:18:36.562 "enable": false 00:18:36.562 } 00:18:36.562 }, 00:18:36.562 { 00:18:36.562 "method": "bdev_enable_histogram", 00:18:36.562 "params": { 00:18:36.562 "name": "nvme0n1", 00:18:36.562 "enable": true 00:18:36.562 } 00:18:36.562 }, 00:18:36.562 { 00:18:36.562 "method": "bdev_wait_for_examine" 00:18:36.562 } 00:18:36.562 ] 00:18:36.562 }, 00:18:36.562 { 00:18:36.562 "subsystem": "nbd", 00:18:36.562 "config": [] 00:18:36.562 } 00:18:36.562 ] 00:18:36.562 }' 00:18:36.562 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1685090 00:18:36.562 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1685090 ']' 00:18:36.562 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1685090 00:18:36.562 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:36.562 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:36.562 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1685090 00:18:36.562 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:36.562 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:36.562 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1685090' 00:18:36.562 killing process with pid 1685090 00:18:36.563 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1685090 00:18:36.563 Received shutdown signal, test time was about 1.000000 seconds 00:18:36.563 00:18:36.563 Latency(us) 00:18:36.563 [2024-11-26T19:48:40.260Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.563 [2024-11-26T19:48:40.260Z] =================================================================================================================== 00:18:36.563 [2024-11-26T19:48:40.260Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:36.563 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1685090 00:18:36.821 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1684946 00:18:36.821 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1684946 
']' 00:18:36.821 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1684946 00:18:36.821 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:36.821 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:36.821 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1684946 00:18:36.821 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:36.821 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:36.821 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1684946' 00:18:36.821 killing process with pid 1684946 00:18:36.821 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1684946 00:18:36.821 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1684946 00:18:37.080 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:18:37.080 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:37.080 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:18:37.080 "subsystems": [ 00:18:37.080 { 00:18:37.080 "subsystem": "keyring", 00:18:37.080 "config": [ 00:18:37.080 { 00:18:37.080 "method": "keyring_file_add_key", 00:18:37.080 "params": { 00:18:37.080 "name": "key0", 00:18:37.080 "path": "/tmp/tmp.1zPxjs6rVC" 00:18:37.080 } 00:18:37.080 } 00:18:37.080 ] 00:18:37.080 }, 00:18:37.080 { 00:18:37.080 "subsystem": "iobuf", 00:18:37.080 "config": [ 00:18:37.080 { 00:18:37.080 "method": "iobuf_set_options", 00:18:37.080 "params": { 00:18:37.080 "small_pool_count": 8192, 00:18:37.080 "large_pool_count": 1024, 00:18:37.080 "small_bufsize": 8192, 00:18:37.080 "large_bufsize": 135168, 00:18:37.080 "enable_numa": false 00:18:37.080 } 00:18:37.080 } 00:18:37.080 ] 00:18:37.080 }, 00:18:37.080 { 00:18:37.080 "subsystem": "sock", 00:18:37.080 "config": [ 00:18:37.080 { 00:18:37.080 "method": "sock_set_default_impl", 00:18:37.080 "params": { 00:18:37.080 "impl_name": "posix" 00:18:37.080 } 00:18:37.080 }, 00:18:37.080 { 00:18:37.080 "method": "sock_impl_set_options", 00:18:37.080 "params": { 00:18:37.080 "impl_name": "ssl", 00:18:37.080 "recv_buf_size": 4096, 00:18:37.080 "send_buf_size": 4096, 00:18:37.080 "enable_recv_pipe": true, 00:18:37.080 "enable_quickack": false, 00:18:37.080 "enable_placement_id": 0, 00:18:37.080 "enable_zerocopy_send_server": true, 00:18:37.080 "enable_zerocopy_send_client": false, 00:18:37.080 "zerocopy_threshold": 0, 00:18:37.080 "tls_version": 0, 00:18:37.080 "enable_ktls": false 00:18:37.080 } 00:18:37.080 }, 00:18:37.080 { 00:18:37.080 "method": "sock_impl_set_options", 00:18:37.080 "params": { 00:18:37.080 "impl_name": "posix", 00:18:37.080 "recv_buf_size": 2097152, 00:18:37.080 "send_buf_size": 2097152, 00:18:37.080 "enable_recv_pipe": true, 00:18:37.080 "enable_quickack": false, 00:18:37.080 "enable_placement_id": 0, 00:18:37.080 "enable_zerocopy_send_server": true, 00:18:37.080 "enable_zerocopy_send_client": false, 00:18:37.080 "zerocopy_threshold": 0, 00:18:37.080 "tls_version": 0, 00:18:37.080 "enable_ktls": false 00:18:37.080 } 00:18:37.080 } 00:18:37.080 ] 00:18:37.080 }, 00:18:37.080 { 00:18:37.080 "subsystem": 
"vmd", 00:18:37.080 "config": [] 00:18:37.080 }, 00:18:37.080 { 00:18:37.080 "subsystem": "accel", 00:18:37.080 "config": [ 00:18:37.080 { 00:18:37.080 "method": "accel_set_options", 00:18:37.080 "params": { 00:18:37.080 "small_cache_size": 128, 00:18:37.080 "large_cache_size": 16, 00:18:37.080 "task_count": 2048, 00:18:37.080 "sequence_count": 2048, 00:18:37.080 "buf_count": 2048 00:18:37.080 } 00:18:37.080 } 00:18:37.080 ] 00:18:37.080 }, 00:18:37.080 { 00:18:37.080 "subsystem": "bdev", 00:18:37.080 "config": [ 00:18:37.080 { 00:18:37.080 "method": "bdev_set_options", 00:18:37.080 "params": { 00:18:37.080 "bdev_io_pool_size": 65535, 00:18:37.080 "bdev_io_cache_size": 256, 00:18:37.080 "bdev_auto_examine": true, 00:18:37.080 "iobuf_small_cache_size": 128, 00:18:37.080 "iobuf_large_cache_size": 16 00:18:37.080 } 00:18:37.080 }, 00:18:37.080 { 00:18:37.080 "method": "bdev_raid_set_options", 00:18:37.081 "params": { 00:18:37.081 "process_window_size_kb": 1024, 00:18:37.081 "process_max_bandwidth_mb_sec": 0 00:18:37.081 } 00:18:37.081 }, 00:18:37.081 { 00:18:37.081 "method": "bdev_iscsi_set_options", 00:18:37.081 "params": { 00:18:37.081 "timeout_sec": 30 00:18:37.081 } 00:18:37.081 }, 00:18:37.081 { 00:18:37.081 "method": "bdev_nvme_set_options", 00:18:37.081 "params": { 00:18:37.081 "action_on_timeout": "none", 00:18:37.081 "timeout_us": 0, 00:18:37.081 "timeout_admin_us": 0, 00:18:37.081 "keep_alive_timeout_ms": 10000, 00:18:37.081 "arbitration_burst": 0, 00:18:37.081 "low_priority_weight": 0, 00:18:37.081 "medium_priority_weight": 0, 00:18:37.081 "high_priority_weight": 0, 00:18:37.081 "nvme_adminq_poll_period_us": 10000, 00:18:37.081 "nvme_ioq_poll_period_us": 0, 00:18:37.081 "io_queue_requests": 0, 00:18:37.081 "delay_cmd_submit": true, 00:18:37.081 "transport_retry_count": 4, 00:18:37.081 "bdev_retry_count": 3, 00:18:37.081 "transport_ack_timeout": 0, 00:18:37.081 "ctrlr_loss_timeout_sec": 0, 00:18:37.081 "reconnect_delay_sec": 0, 00:18:37.081 "fast_io_fail_timeout_sec": 0, 00:18:37.081 "disable_auto_failback": false, 00:18:37.081 "generate_uuids": false, 00:18:37.081 "transport_tos": 0, 00:18:37.081 "nvme_error_stat": false, 00:18:37.081 "rdma_srq_size": 0, 00:18:37.081 "io_path_stat": false, 00:18:37.081 "allow_accel_sequence": false, 00:18:37.081 "rdma_max_cq_size": 0, 00:18:37.081 "rdma_cm_event_timeout_ms": 0, 00:18:37.081 "dhchap_digests": [ 00:18:37.081 "sha256", 00:18:37.081 "sha384", 00:18:37.081 "sha512" 00:18:37.081 ], 00:18:37.081 "dhchap_dhgroups": [ 00:18:37.081 "null", 00:18:37.081 "ffdhe2048", 00:18:37.081 "ffdhe3072", 00:18:37.081 "ffdhe4096", 00:18:37.081 "ffdhe6144", 00:18:37.081 "ffdhe8192" 00:18:37.081 ] 00:18:37.081 } 00:18:37.081 }, 00:18:37.081 { 00:18:37.081 "method": "bdev_nvme_set_hotplug", 00:18:37.081 "params": { 00:18:37.081 "period_us": 100000, 00:18:37.081 "enable": false 00:18:37.081 } 00:18:37.081 }, 00:18:37.081 { 00:18:37.081 "method": "bdev_malloc_create", 00:18:37.081 "params": { 00:18:37.081 "name": "malloc0", 00:18:37.081 "num_blocks": 8192, 00:18:37.081 "block_size": 4096, 00:18:37.081 "physical_block_size": 4096, 00:18:37.081 "uuid": "c8d1db27-116a-42b8-a18e-be97a0db97da", 00:18:37.081 "optimal_io_boundary": 0, 00:18:37.081 "md_size": 0, 00:18:37.081 "dif_type": 0, 00:18:37.081 "dif_is_head_of_md": false, 00:18:37.081 "dif_pi_format": 0 00:18:37.081 } 00:18:37.081 }, 00:18:37.081 { 00:18:37.081 "method": "bdev_wait_for_examine" 00:18:37.081 } 00:18:37.081 ] 00:18:37.081 }, 00:18:37.081 { 00:18:37.081 "subsystem": "nbd", 00:18:37.081 "config": 
[] 00:18:37.081 }, 00:18:37.081 { 00:18:37.081 "subsystem": "scheduler", 00:18:37.081 "config": [ 00:18:37.081 { 00:18:37.081 "method": "framework_set_scheduler", 00:18:37.081 "params": { 00:18:37.081 "name": "static" 00:18:37.081 } 00:18:37.081 } 00:18:37.081 ] 00:18:37.081 }, 00:18:37.081 { 00:18:37.081 "subsystem": "nvmf", 00:18:37.081 "config": [ 00:18:37.081 { 00:18:37.081 "method": "nvmf_set_config", 00:18:37.081 "params": { 00:18:37.081 "discovery_filter": "match_any", 00:18:37.081 "admin_cmd_passthru": { 00:18:37.081 "identify_ctrlr": false 00:18:37.081 }, 00:18:37.081 "dhchap_digests": [ 00:18:37.081 "sha256", 00:18:37.081 "sha384", 00:18:37.081 "sha512" 00:18:37.081 ], 00:18:37.081 "dhchap_dhgroups": [ 00:18:37.081 "null", 00:18:37.081 "ffdhe2048", 00:18:37.081 "ffdhe3072", 00:18:37.081 "ffdhe4096", 00:18:37.081 "ffdhe6144", 00:18:37.081 "ffdhe8192" 00:18:37.081 ] 00:18:37.081 } 00:18:37.081 }, 00:18:37.081 { 00:18:37.081 "method": "nvmf_set_max_subsystems", 00:18:37.081 "params": { 00:18:37.081 "max_subsystems": 1024 00:18:37.081 } 00:18:37.081 }, 00:18:37.081 { 00:18:37.081 "method": "nvmf_set_crdt", 00:18:37.081 "params": { 00:18:37.081 "crdt1": 0, 00:18:37.081 "crdt2": 0, 00:18:37.081 "crdt3": 0 00:18:37.081 } 00:18:37.081 }, 00:18:37.081 { 00:18:37.081 "method": "nvmf_create_transport", 00:18:37.081 "params": { 00:18:37.081 "trtype": "TCP", 00:18:37.081 "max_queue_depth": 128, 00:18:37.081 "max_io_qpairs_per_ctrlr": 127, 00:18:37.081 "in_capsule_data_size": 4096, 00:18:37.081 "max_io_size": 131072, 00:18:37.081 "io_unit_size": 131072, 00:18:37.081 "max_aq_depth": 128, 00:18:37.081 "num_shared_buffers": 511, 00:18:37.081 "buf_cache_size": 4294967295, 00:18:37.081 "dif_insert_or_strip": false, 00:18:37.081 "zcopy": false, 00:18:37.081 "c2h_success": false, 00:18:37.081 "sock_priority": 0, 00:18:37.081 "abort_timeout_sec": 1, 00:18:37.081 "ack_timeout": 0, 00:18:37.081 "data_wr_pool_size": 0 00:18:37.081 } 00:18:37.081 }, 00:18:37.081 { 00:18:37.081 "method": "nvmf_create_subsystem", 00:18:37.081 "params": { 00:18:37.081 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.081 "allow_any_host": false, 00:18:37.081 "serial_number": "00000000000000000000", 00:18:37.081 "model_number": "SPDK bdev Controller", 00:18:37.081 "max_namespaces": 32, 00:18:37.082 "min_cntlid": 1, 00:18:37.082 "max_cntlid": 65519, 00:18:37.082 "ana_reporting": false 00:18:37.082 } 00:18:37.082 }, 00:18:37.082 { 00:18:37.082 "method": "nvmf_subsystem_add_host", 00:18:37.082 "params": { 00:18:37.082 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.082 "host": "nqn.2016-06.io.spdk:host1", 00:18:37.082 "psk": "key0" 00:18:37.082 } 00:18:37.082 }, 00:18:37.082 { 00:18:37.082 "method": "nvmf_subsystem_add_ns", 00:18:37.082 "params": { 00:18:37.082 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.082 "namespace": { 00:18:37.082 "nsid": 1, 00:18:37.082 "bdev_name": "malloc0", 00:18:37.082 "nguid": "C8D1DB27116A42B8A18EBE97A0DB97DA", 00:18:37.082 "uuid": "c8d1db27-116a-42b8-a18e-be97a0db97da", 00:18:37.082 "no_auto_visible": false 00:18:37.082 } 00:18:37.082 } 00:18:37.082 }, 00:18:37.082 { 00:18:37.082 "method": "nvmf_subsystem_add_listener", 00:18:37.082 "params": { 00:18:37.082 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.082 "listen_address": { 00:18:37.082 "trtype": "TCP", 00:18:37.082 "adrfam": "IPv4", 00:18:37.082 "traddr": "10.0.0.2", 00:18:37.082 "trsvcid": "4420" 00:18:37.082 }, 00:18:37.082 "secure_channel": false, 00:18:37.082 "sock_impl": "ssl" 00:18:37.082 } 00:18:37.082 } 00:18:37.082 ] 00:18:37.082 } 
00:18:37.082 ] 00:18:37.082 }' 00:18:37.082 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:37.082 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.082 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1685381 00:18:37.082 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:37.082 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1685381 00:18:37.082 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1685381 ']' 00:18:37.082 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.082 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:37.082 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.082 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:37.082 20:48:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.082 [2024-11-26 20:48:40.685678] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:18:37.082 [2024-11-26 20:48:40.685783] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:37.082 [2024-11-26 20:48:40.760299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.340 [2024-11-26 20:48:40.816507] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:37.340 [2024-11-26 20:48:40.816562] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:37.340 [2024-11-26 20:48:40.816589] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:37.340 [2024-11-26 20:48:40.816601] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:37.340 [2024-11-26 20:48:40.816610] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:37.340 [2024-11-26 20:48:40.817259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.598 [2024-11-26 20:48:41.054466] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:37.598 [2024-11-26 20:48:41.086477] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:37.598 [2024-11-26 20:48:41.086676] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:38.164 20:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:38.164 20:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:38.164 20:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:38.164 20:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:38.164 20:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.164 20:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:38.164 20:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1685535 00:18:38.164 20:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1685535 /var/tmp/bdevperf.sock 00:18:38.164 20:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1685535 ']' 00:18:38.164 20:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:38.164 20:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:38.164 20:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:38.164 20:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:18:38.164 "subsystems": [ 00:18:38.164 { 00:18:38.164 "subsystem": "keyring", 00:18:38.164 "config": [ 00:18:38.164 { 00:18:38.164 "method": "keyring_file_add_key", 00:18:38.164 "params": { 00:18:38.164 "name": "key0", 00:18:38.164 "path": "/tmp/tmp.1zPxjs6rVC" 00:18:38.164 } 00:18:38.164 } 00:18:38.164 ] 00:18:38.164 }, 00:18:38.164 { 00:18:38.164 "subsystem": "iobuf", 00:18:38.164 "config": [ 00:18:38.164 { 00:18:38.164 "method": "iobuf_set_options", 00:18:38.164 "params": { 00:18:38.164 "small_pool_count": 8192, 00:18:38.164 "large_pool_count": 1024, 00:18:38.164 "small_bufsize": 8192, 00:18:38.164 "large_bufsize": 135168, 00:18:38.164 "enable_numa": false 00:18:38.164 } 00:18:38.164 } 00:18:38.164 ] 00:18:38.164 }, 00:18:38.164 { 00:18:38.164 "subsystem": "sock", 00:18:38.164 "config": [ 00:18:38.164 { 00:18:38.164 "method": "sock_set_default_impl", 00:18:38.164 "params": { 00:18:38.164 "impl_name": "posix" 00:18:38.164 } 00:18:38.164 }, 00:18:38.164 { 00:18:38.164 "method": "sock_impl_set_options", 00:18:38.164 "params": { 00:18:38.164 "impl_name": "ssl", 00:18:38.164 "recv_buf_size": 4096, 00:18:38.164 "send_buf_size": 4096, 00:18:38.164 "enable_recv_pipe": true, 00:18:38.164 "enable_quickack": false, 00:18:38.164 "enable_placement_id": 0, 00:18:38.164 "enable_zerocopy_send_server": true, 00:18:38.164 "enable_zerocopy_send_client": false, 00:18:38.164 "zerocopy_threshold": 0, 00:18:38.164 "tls_version": 0, 00:18:38.164 
"enable_ktls": false 00:18:38.164 } 00:18:38.164 }, 00:18:38.164 { 00:18:38.164 "method": "sock_impl_set_options", 00:18:38.164 "params": { 00:18:38.164 "impl_name": "posix", 00:18:38.164 "recv_buf_size": 2097152, 00:18:38.164 "send_buf_size": 2097152, 00:18:38.164 "enable_recv_pipe": true, 00:18:38.164 "enable_quickack": false, 00:18:38.164 "enable_placement_id": 0, 00:18:38.164 "enable_zerocopy_send_server": true, 00:18:38.164 "enable_zerocopy_send_client": false, 00:18:38.164 "zerocopy_threshold": 0, 00:18:38.164 "tls_version": 0, 00:18:38.164 "enable_ktls": false 00:18:38.164 } 00:18:38.164 } 00:18:38.164 ] 00:18:38.164 }, 00:18:38.164 { 00:18:38.164 "subsystem": "vmd", 00:18:38.164 "config": [] 00:18:38.164 }, 00:18:38.164 { 00:18:38.164 "subsystem": "accel", 00:18:38.164 "config": [ 00:18:38.164 { 00:18:38.164 "method": "accel_set_options", 00:18:38.164 "params": { 00:18:38.164 "small_cache_size": 128, 00:18:38.164 "large_cache_size": 16, 00:18:38.164 "task_count": 2048, 00:18:38.164 "sequence_count": 2048, 00:18:38.164 "buf_count": 2048 00:18:38.164 } 00:18:38.164 } 00:18:38.164 ] 00:18:38.164 }, 00:18:38.164 { 00:18:38.164 "subsystem": "bdev", 00:18:38.164 "config": [ 00:18:38.164 { 00:18:38.164 "method": "bdev_set_options", 00:18:38.164 "params": { 00:18:38.164 "bdev_io_pool_size": 65535, 00:18:38.164 "bdev_io_cache_size": 256, 00:18:38.164 "bdev_auto_examine": true, 00:18:38.164 "iobuf_small_cache_size": 128, 00:18:38.164 "iobuf_large_cache_size": 16 00:18:38.164 } 00:18:38.164 }, 00:18:38.164 { 00:18:38.164 "method": "bdev_raid_set_options", 00:18:38.164 "params": { 00:18:38.164 "process_window_size_kb": 1024, 00:18:38.164 "process_max_bandwidth_mb_sec": 0 00:18:38.164 } 00:18:38.164 }, 00:18:38.164 { 00:18:38.164 "method": "bdev_iscsi_set_options", 00:18:38.164 "params": { 00:18:38.164 "timeout_sec": 30 00:18:38.164 } 00:18:38.164 }, 00:18:38.164 { 00:18:38.164 "method": "bdev_nvme_set_options", 00:18:38.164 "params": { 00:18:38.164 "action_on_timeout": "none", 00:18:38.164 "timeout_us": 0, 00:18:38.164 "timeout_admin_us": 0, 00:18:38.164 "keep_alive_timeout_ms": 10000, 00:18:38.164 "arbitration_burst": 0, 00:18:38.164 "low_priority_weight": 0, 00:18:38.164 "medium_priority_weight": 0, 00:18:38.164 "high_priority_weight": 0, 00:18:38.164 "nvme_adminq_poll_period_us": 10000, 00:18:38.164 "nvme_ioq_poll_period_us": 0, 00:18:38.164 "io_queue_requests": 512, 00:18:38.164 "delay_cmd_submit": true, 00:18:38.164 "transport_retry_count": 4, 00:18:38.164 "bdev_retry_count": 3, 00:18:38.164 "transport_ack_timeout": 0, 00:18:38.164 "ctrlr_loss_timeout_sec": 0, 00:18:38.164 "reconnect_delay_sec": 0, 00:18:38.164 "fast_io_fail_timeout_sec": 0, 00:18:38.164 "disable_auto_failback": false, 00:18:38.164 "generate_uuids": false, 00:18:38.164 "transport_tos": 0, 00:18:38.164 "nvme_error_stat": false, 00:18:38.164 "rdma_srq_size": 0, 00:18:38.164 "io_path_stat": false, 00:18:38.164 "allow_accel_sequence": false, 00:18:38.164 "rdma_max_cq_size": 0, 00:18:38.164 "rdma_cm_event_timeout_ms": 0, 00:18:38.164 "dhchap_digests": [ 00:18:38.164 "sha256", 00:18:38.164 "sha384", 00:18:38.164 "sha512" 00:18:38.164 ], 00:18:38.164 "dhchap_dhgroups": [ 00:18:38.164 "null", 00:18:38.165 "ffdhe2048", 00:18:38.165 "ffdhe3072", 00:18:38.165 "ffdhe4096", 00:18:38.165 "ffdhe6144", 00:18:38.165 "ffdhe8192" 00:18:38.165 ] 00:18:38.165 } 00:18:38.165 }, 00:18:38.165 { 00:18:38.165 "method": "bdev_nvme_attach_controller", 00:18:38.165 "params": { 00:18:38.165 "name": "nvme0", 00:18:38.165 "trtype": "TCP", 00:18:38.165 
"adrfam": "IPv4", 00:18:38.165 "traddr": "10.0.0.2", 00:18:38.165 "trsvcid": "4420", 00:18:38.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.165 "prchk_reftag": false, 00:18:38.165 "prchk_guard": false, 00:18:38.165 "ctrlr_loss_timeout_sec": 0, 00:18:38.165 "reconnect_delay_sec": 0, 00:18:38.165 "fast_io_fail_timeout_sec": 0, 00:18:38.165 "psk": "key0", 00:18:38.165 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:38.165 "hdgst": false, 00:18:38.165 "ddgst": false, 00:18:38.165 "multipath": "multipath" 00:18:38.165 } 00:18:38.165 }, 00:18:38.165 { 00:18:38.165 "method": "bdev_nvme_set_hotplug", 00:18:38.165 "params": { 00:18:38.165 "period_us": 100000, 00:18:38.165 "enable": false 00:18:38.165 } 00:18:38.165 }, 00:18:38.165 { 00:18:38.165 "method": "bdev_enable_histogram", 00:18:38.165 "params": { 00:18:38.165 "name": "nvme0n1", 00:18:38.165 "enable": true 00:18:38.165 } 00:18:38.165 }, 00:18:38.165 { 00:18:38.165 "method": "bdev_wait_for_examine" 00:18:38.165 } 00:18:38.165 ] 00:18:38.165 }, 00:18:38.165 { 00:18:38.165 "subsystem": "nbd", 00:18:38.165 "config": [] 00:18:38.165 } 00:18:38.165 ] 00:18:38.165 }' 00:18:38.165 20:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:38.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:38.165 20:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:38.165 20:48:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.165 [2024-11-26 20:48:41.761064] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:18:38.165 [2024-11-26 20:48:41.761156] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1685535 ] 00:18:38.165 [2024-11-26 20:48:41.826195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.423 [2024-11-26 20:48:41.884065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:38.423 [2024-11-26 20:48:42.064407] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:38.680 20:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:38.680 20:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:38.680 20:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:38.680 20:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:18:38.938 20:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.938 20:48:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:38.938 Running I/O for 1 seconds... 
00:18:40.311 3525.00 IOPS, 13.77 MiB/s 00:18:40.311 Latency(us) 00:18:40.311 [2024-11-26T19:48:44.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.311 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:40.311 Verification LBA range: start 0x0 length 0x2000 00:18:40.311 nvme0n1 : 1.02 3579.72 13.98 0.00 0.00 35401.86 6213.78 27573.67 00:18:40.311 [2024-11-26T19:48:44.008Z] =================================================================================================================== 00:18:40.311 [2024-11-26T19:48:44.008Z] Total : 3579.72 13.98 0.00 0.00 35401.86 6213.78 27573.67 00:18:40.311 { 00:18:40.311 "results": [ 00:18:40.311 { 00:18:40.311 "job": "nvme0n1", 00:18:40.311 "core_mask": "0x2", 00:18:40.311 "workload": "verify", 00:18:40.311 "status": "finished", 00:18:40.311 "verify_range": { 00:18:40.311 "start": 0, 00:18:40.311 "length": 8192 00:18:40.311 }, 00:18:40.311 "queue_depth": 128, 00:18:40.311 "io_size": 4096, 00:18:40.311 "runtime": 1.020751, 00:18:40.311 "iops": 3579.717286586053, 00:18:40.311 "mibps": 13.98327065072677, 00:18:40.311 "io_failed": 0, 00:18:40.311 "io_timeout": 0, 00:18:40.311 "avg_latency_us": 35401.861934359105, 00:18:40.311 "min_latency_us": 6213.783703703703, 00:18:40.311 "max_latency_us": 27573.665185185186 00:18:40.311 } 00:18:40.311 ], 00:18:40.311 "core_count": 1 00:18:40.311 } 00:18:40.311 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:18:40.311 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:18:40.311 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:40.311 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:18:40.311 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:18:40.311 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:18:40.311 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:40.311 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:18:40.311 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:18:40.311 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:18:40.311 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:40.311 nvmf_trace.0 00:18:40.311 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:18:40.311 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1685535 00:18:40.311 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1685535 ']' 00:18:40.312 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1685535 00:18:40.312 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:40.312 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:40.312 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1685535 
00:18:40.312 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:40.312 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:40.312 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1685535' 00:18:40.312 killing process with pid 1685535 00:18:40.312 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1685535 00:18:40.312 Received shutdown signal, test time was about 1.000000 seconds 00:18:40.312 00:18:40.312 Latency(us) 00:18:40.312 [2024-11-26T19:48:44.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.312 [2024-11-26T19:48:44.009Z] =================================================================================================================== 00:18:40.312 [2024-11-26T19:48:44.009Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:40.312 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1685535 00:18:40.312 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:40.312 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:40.312 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:18:40.312 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:40.312 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:18:40.312 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:40.312 20:48:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:40.312 rmmod nvme_tcp 00:18:40.312 rmmod nvme_fabrics 00:18:40.312 rmmod nvme_keyring 00:18:40.570 20:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:40.570 20:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:18:40.570 20:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:18:40.570 20:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1685381 ']' 00:18:40.570 20:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1685381 00:18:40.570 20:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1685381 ']' 00:18:40.570 20:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1685381 00:18:40.570 20:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:40.570 20:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:40.570 20:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1685381 00:18:40.570 20:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:40.570 20:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:40.570 20:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1685381' 00:18:40.570 killing process with pid 1685381 00:18:40.570 20:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1685381 00:18:40.570 20:48:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1685381 00:18:40.828 20:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:40.828 20:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:40.828 20:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:40.828 20:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:18:40.828 20:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:18:40.828 20:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:40.828 20:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:18:40.828 20:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:40.828 20:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:40.828 20:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.828 20:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:40.828 20:48:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.737 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:42.737 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.kJX6wm9Yxu /tmp/tmp.c5PW5lVg20 /tmp/tmp.1zPxjs6rVC 00:18:42.737 00:18:42.737 real 1m23.236s 00:18:42.737 user 2m20.457s 00:18:42.737 sys 0m24.460s 00:18:42.737 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:42.737 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:42.737 ************************************ 00:18:42.737 END TEST nvmf_tls 00:18:42.737 ************************************ 00:18:42.737 20:48:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:42.737 20:48:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:42.737 20:48:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:42.737 20:48:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:42.737 ************************************ 00:18:42.737 START TEST nvmf_fips 00:18:42.737 ************************************ 00:18:42.738 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:42.997 * Looking for test storage... 
00:18:42.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:42.997 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:42.997 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:18:42.997 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:42.997 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:42.997 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:42.997 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:42.997 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:42.997 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:42.997 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:42.997 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:42.997 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:42.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.998 --rc genhtml_branch_coverage=1 00:18:42.998 --rc genhtml_function_coverage=1 00:18:42.998 --rc genhtml_legend=1 00:18:42.998 --rc geninfo_all_blocks=1 00:18:42.998 --rc geninfo_unexecuted_blocks=1 00:18:42.998 00:18:42.998 ' 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:42.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.998 --rc genhtml_branch_coverage=1 00:18:42.998 --rc genhtml_function_coverage=1 00:18:42.998 --rc genhtml_legend=1 00:18:42.998 --rc geninfo_all_blocks=1 00:18:42.998 --rc geninfo_unexecuted_blocks=1 00:18:42.998 00:18:42.998 ' 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:42.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.998 --rc genhtml_branch_coverage=1 00:18:42.998 --rc genhtml_function_coverage=1 00:18:42.998 --rc genhtml_legend=1 00:18:42.998 --rc geninfo_all_blocks=1 00:18:42.998 --rc geninfo_unexecuted_blocks=1 00:18:42.998 00:18:42.998 ' 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:42.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:42.998 --rc genhtml_branch_coverage=1 00:18:42.998 --rc genhtml_function_coverage=1 00:18:42.998 --rc genhtml_legend=1 00:18:42.998 --rc geninfo_all_blocks=1 00:18:42.998 --rc geninfo_unexecuted_blocks=1 00:18:42.998 00:18:42.998 ' 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:42.998 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:18:42.998 20:48:46 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:18:42.998 Error setting digest 00:18:42.998 40E2F426D47F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:42.998 40E2F426D47F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:42.998 
20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:18:42.998 20:48:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:45.530 20:48:48 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:45.530 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:45.530 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:45.530 20:48:48 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:45.530 Found net devices under 0000:09:00.0: cvl_0_0 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:45.530 Found net devices under 0000:09:00.1: cvl_0_1 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:18:45.530 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:45.531 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:45.531 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:45.531 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:45.531 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:45.531 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:45.531 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:45.531 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:45.531 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:45.531 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:45.531 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:45.531 20:48:48 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:45.531 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:45.531 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:45.531 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:45.531 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:45.531 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:45.531 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:45.531 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:45.531 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:45.531 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:45.531 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:45.531 20:48:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:45.531 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:45.531 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:45.531 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:45.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:45.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:18:45.531 00:18:45.531 --- 10.0.0.2 ping statistics --- 00:18:45.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.531 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:18:45.531 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:45.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:45.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:18:45.531 00:18:45.531 --- 10.0.0.1 ping statistics --- 00:18:45.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.531 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:18:45.531 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:45.531 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:18:45.531 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:45.531 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:45.531 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:45.531 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:45.531 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:45.531 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:45.531 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:45.531 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:18:45.531 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:45.531 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:45.531 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:45.531 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1687887 00:18:45.531 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:45.531 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1687887 00:18:45.531 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1687887 ']' 00:18:45.531 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.531 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.531 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.531 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.531 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:45.531 [2024-11-26 20:48:49.120862] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:18:45.531 [2024-11-26 20:48:49.120958] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.531 [2024-11-26 20:48:49.204834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.788 [2024-11-26 20:48:49.275453] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.788 [2024-11-26 20:48:49.275511] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:45.788 [2024-11-26 20:48:49.275540] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:45.788 [2024-11-26 20:48:49.275558] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:45.788 [2024-11-26 20:48:49.275587] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:45.789 [2024-11-26 20:48:49.276265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.789 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.789 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:18:45.789 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:45.789 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:45.789 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:45.789 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.789 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:18:45.789 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:45.789 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:18:45.789 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.VVM 00:18:45.789 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:45.789 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.VVM 00:18:45.789 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.VVM 00:18:45.789 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.VVM 00:18:45.789 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:46.046 [2024-11-26 20:48:49.686715] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:46.046 [2024-11-26 20:48:49.702687] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:46.046 [2024-11-26 20:48:49.702952] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:46.305 malloc0 00:18:46.305 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:46.305 20:48:49 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1687925 00:18:46.305 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:46.305 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1687925 /var/tmp/bdevperf.sock 00:18:46.305 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1687925 ']' 00:18:46.305 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:46.305 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:46.305 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:46.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:46.305 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:46.305 20:48:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:46.305 [2024-11-26 20:48:49.840185] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:18:46.305 [2024-11-26 20:48:49.840287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1687925 ] 00:18:46.305 [2024-11-26 20:48:49.906005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.305 [2024-11-26 20:48:49.965038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:46.562 20:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:46.562 20:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:18:46.562 20:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.VVM 00:18:46.820 20:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:47.078 [2024-11-26 20:48:50.618784] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:47.078 TLSTESTn1 00:18:47.078 20:48:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:47.336 Running I/O for 10 seconds... 
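Before this FIPS variant runs the same TLS attach, fips.sh first proves the system OpenSSL is genuinely operating in FIPS mode: the version must be at least 3.0.0 (3.1.1 here), /usr/lib64/ossl-modules/fips.so must exist, OPENSSL_CONF is pointed at a generated spdk_fips.conf, the provider list must contain both a base and a fips provider, and an MD5 digest is expected to fail, which is exactly what the "Error setting digest" lines earlier in the trace show. A condensed sketch of that preflight follows, assuming the spdk_fips.conf generation step (build_openssl_config) has already produced the config; the file-descriptor plumbing and helper wrappers of the real script are omitted:

  # OpenSSL must be 3.x or newer and ship the FIPS provider module
  ver=$(openssl version | awk '{print $2}')
  [ "$(printf '%s\n' 3.0.0 "$ver" | sort -V | head -n1)" = 3.0.0 ] || exit 1
  [ -f /usr/lib64/ossl-modules/fips.so ] || exit 1
  # activate the generated FIPS config and confirm base + fips providers are loaded
  export OPENSSL_CONF=spdk_fips.conf
  openssl list -providers | grep name
  # negative test: MD5 is not FIPS-approved, so this digest must fail
  if echo test | openssl md5 >/dev/null 2>&1; then
      echo 'md5 succeeded, FIPS mode is not active' >&2
      exit 1
  fi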
00:18:49.200 3192.00 IOPS, 12.47 MiB/s [2024-11-26T19:48:54.269Z] 3345.00 IOPS, 13.07 MiB/s [2024-11-26T19:48:55.200Z] 3434.00 IOPS, 13.41 MiB/s [2024-11-26T19:48:56.194Z] 3461.00 IOPS, 13.52 MiB/s [2024-11-26T19:48:57.146Z] 3455.60 IOPS, 13.50 MiB/s [2024-11-26T19:48:58.073Z] 3451.33 IOPS, 13.48 MiB/s [2024-11-26T19:48:59.003Z] 3467.00 IOPS, 13.54 MiB/s [2024-11-26T19:48:59.934Z] 3484.12 IOPS, 13.61 MiB/s [2024-11-26T19:49:00.865Z] 3493.00 IOPS, 13.64 MiB/s [2024-11-26T19:49:01.123Z] 3497.20 IOPS, 13.66 MiB/s 00:18:57.426 Latency(us) 00:18:57.426 [2024-11-26T19:49:01.123Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.426 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:57.426 Verification LBA range: start 0x0 length 0x2000 00:18:57.426 TLSTESTn1 : 10.02 3502.41 13.68 0.00 0.00 36484.10 6699.24 44273.21 00:18:57.426 [2024-11-26T19:49:01.123Z] =================================================================================================================== 00:18:57.426 [2024-11-26T19:49:01.123Z] Total : 3502.41 13.68 0.00 0.00 36484.10 6699.24 44273.21 00:18:57.426 { 00:18:57.426 "results": [ 00:18:57.426 { 00:18:57.426 "job": "TLSTESTn1", 00:18:57.426 "core_mask": "0x4", 00:18:57.426 "workload": "verify", 00:18:57.426 "status": "finished", 00:18:57.426 "verify_range": { 00:18:57.426 "start": 0, 00:18:57.426 "length": 8192 00:18:57.426 }, 00:18:57.426 "queue_depth": 128, 00:18:57.426 "io_size": 4096, 00:18:57.426 "runtime": 10.021376, 00:18:57.426 "iops": 3502.413241455066, 00:18:57.426 "mibps": 13.681301724433851, 00:18:57.426 "io_failed": 0, 00:18:57.426 "io_timeout": 0, 00:18:57.426 "avg_latency_us": 36484.10080084586, 00:18:57.426 "min_latency_us": 6699.235555555556, 00:18:57.426 "max_latency_us": 44273.20888888889 00:18:57.426 } 00:18:57.426 ], 00:18:57.426 "core_count": 1 00:18:57.426 } 00:18:57.426 20:49:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:18:57.426 20:49:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:18:57.426 20:49:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:18:57.426 20:49:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:18:57.426 20:49:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:18:57.426 20:49:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:57.426 20:49:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:18:57.426 20:49:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:18:57.426 20:49:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:18:57.426 20:49:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:57.426 nvmf_trace.0 00:18:57.426 20:49:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:18:57.426 20:49:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1687925 00:18:57.426 20:49:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1687925 ']' 00:18:57.426 20:49:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 1687925 00:18:57.426 20:49:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:18:57.426 20:49:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:57.426 20:49:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1687925 00:18:57.426 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:57.426 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:57.426 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1687925' 00:18:57.426 killing process with pid 1687925 00:18:57.426 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1687925 00:18:57.426 Received shutdown signal, test time was about 10.000000 seconds 00:18:57.426 00:18:57.426 Latency(us) 00:18:57.426 [2024-11-26T19:49:01.123Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.426 [2024-11-26T19:49:01.123Z] =================================================================================================================== 00:18:57.426 [2024-11-26T19:49:01.123Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:57.426 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1687925 00:18:57.683 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:18:57.683 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:57.683 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:18:57.683 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:57.683 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:18:57.683 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:57.683 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:57.683 rmmod nvme_tcp 00:18:57.683 rmmod nvme_fabrics 00:18:57.683 rmmod nvme_keyring 00:18:57.683 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:57.683 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:18:57.683 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:18:57.683 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1687887 ']' 00:18:57.683 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1687887 00:18:57.683 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1687887 ']' 00:18:57.683 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1687887 00:18:57.683 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:18:57.683 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:57.683 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1687887 00:18:57.683 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:57.683 20:49:01 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:57.683 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1687887' 00:18:57.683 killing process with pid 1687887 00:18:57.683 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1687887 00:18:57.683 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1687887 00:18:57.941 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:57.941 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:57.941 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:57.941 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:18:57.941 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:18:57.941 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:57.941 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:18:57.941 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:57.941 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:57.941 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.941 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:57.941 20:49:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.VVM 00:19:00.474 00:19:00.474 real 0m17.260s 00:19:00.474 user 0m22.845s 00:19:00.474 sys 0m5.479s 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:00.474 ************************************ 00:19:00.474 END TEST nvmf_fips 00:19:00.474 ************************************ 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:00.474 ************************************ 00:19:00.474 START TEST nvmf_control_msg_list 00:19:00.474 ************************************ 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:00.474 * Looking for test storage... 
00:19:00.474 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:00.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.474 --rc genhtml_branch_coverage=1 00:19:00.474 --rc genhtml_function_coverage=1 00:19:00.474 --rc genhtml_legend=1 00:19:00.474 --rc geninfo_all_blocks=1 00:19:00.474 --rc geninfo_unexecuted_blocks=1 00:19:00.474 00:19:00.474 ' 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:00.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.474 --rc genhtml_branch_coverage=1 00:19:00.474 --rc genhtml_function_coverage=1 00:19:00.474 --rc genhtml_legend=1 00:19:00.474 --rc geninfo_all_blocks=1 00:19:00.474 --rc geninfo_unexecuted_blocks=1 00:19:00.474 00:19:00.474 ' 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:00.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.474 --rc genhtml_branch_coverage=1 00:19:00.474 --rc genhtml_function_coverage=1 00:19:00.474 --rc genhtml_legend=1 00:19:00.474 --rc geninfo_all_blocks=1 00:19:00.474 --rc geninfo_unexecuted_blocks=1 00:19:00.474 00:19:00.474 ' 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:00.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:00.474 --rc genhtml_branch_coverage=1 00:19:00.474 --rc genhtml_function_coverage=1 00:19:00.474 --rc genhtml_legend=1 00:19:00.474 --rc geninfo_all_blocks=1 00:19:00.474 --rc geninfo_unexecuted_blocks=1 00:19:00.474 00:19:00.474 ' 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:00.474 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.475 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.475 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.475 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:00.475 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:00.475 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:00.475 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:00.475 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:00.475 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:00.475 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:00.475 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:00.475 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:00.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:00.475 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:00.475 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:00.475 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:00.475 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:00.475 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:00.475 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:00.475 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:00.475 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:00.475 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:00.475 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.475 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:00.475 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.475 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:00.475 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:00.475 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:00.475 20:49:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:02.374 20:49:05 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:02.374 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:02.374 20:49:05 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:02.374 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:02.374 Found net devices under 0000:09:00.0: cvl_0_0 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:02.374 Found net devices under 0000:09:00.1: cvl_0_1 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:02.374 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:02.375 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:02.375 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:02.375 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:02.375 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:02.375 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:02.375 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:02.375 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:02.375 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:02.375 20:49:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:02.375 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:02.375 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:02.375 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:02.375 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:02.375 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:02.375 20:49:06 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:02.375 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:02.375 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:02.633 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:02.633 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:19:02.633 00:19:02.633 --- 10.0.0.2 ping statistics --- 00:19:02.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:02.633 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:19:02.633 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:02.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:02.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:19:02.633 00:19:02.633 --- 10.0.0.1 ping statistics --- 00:19:02.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:02.633 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:19:02.633 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:02.633 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:19:02.633 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:02.633 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:02.633 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:02.633 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:02.633 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:02.633 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:02.633 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:02.633 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:02.633 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:02.633 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:02.633 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:02.633 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1691305 00:19:02.633 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1691305 00:19:02.633 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:02.633 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 1691305 ']' 00:19:02.633 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.633 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:02.633 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.633 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:02.633 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:02.633 [2024-11-26 20:49:06.157971] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:19:02.633 [2024-11-26 20:49:06.158053] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:02.633 [2024-11-26 20:49:06.233040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.633 [2024-11-26 20:49:06.291945] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:02.633 [2024-11-26 20:49:06.292002] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:02.633 [2024-11-26 20:49:06.292015] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:02.633 [2024-11-26 20:49:06.292026] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:02.633 [2024-11-26 20:49:06.292036] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
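For reference, the nvmftestinit plumbing traced above amounts to roughly the following namespace setup (a condensed sketch using the interface names detected in this run, cvl_0_0 and cvl_0_1 on the two e810 ports; the iptables comment tag and the address flushes are omitted):

  # move the target-side port into its own namespace and address both ends of the link
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  # bring the links up, open the NVMe/TCP port on the initiator side, and check reachability both ways
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # nvmf_tgt is then launched inside the namespace, as in the nvmfappstart call above
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF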
00:19:02.633 [2024-11-26 20:49:06.292657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.891 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:02.892 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:19:02.892 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:02.892 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:02.892 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:02.892 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:02.892 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:02.892 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:02.892 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:02.892 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.892 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:02.892 [2024-11-26 20:49:06.448504] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:02.892 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.892 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:02.892 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.892 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:02.892 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.892 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:02.892 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.892 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:02.892 Malloc0 00:19:02.892 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.892 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:02.892 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.892 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:02.892 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.892 20:49:06 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:02.892 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.892 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:02.892 [2024-11-26 20:49:06.487138] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:02.892 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.892 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1691327 00:19:02.892 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:02.892 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1691328 00:19:02.892 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:02.892 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1691329 00:19:02.892 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:02.892 20:49:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1691327 00:19:02.892 [2024-11-26 20:49:06.545687] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:02.892 [2024-11-26 20:49:06.555957] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:02.892 [2024-11-26 20:49:06.556172] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:04.263 Initializing NVMe Controllers 00:19:04.263 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:04.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:04.263 Initialization complete. Launching workers. 
00:19:04.263 ======================================================== 00:19:04.263 Latency(us) 00:19:04.263 Device Information : IOPS MiB/s Average min max 00:19:04.263 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 24.00 0.09 41840.57 40843.63 41969.69 00:19:04.263 ======================================================== 00:19:04.263 Total : 24.00 0.09 41840.57 40843.63 41969.69 00:19:04.263 00:19:04.263 Initializing NVMe Controllers 00:19:04.263 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:04.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:04.263 Initialization complete. Launching workers. 00:19:04.263 ======================================================== 00:19:04.263 Latency(us) 00:19:04.263 Device Information : IOPS MiB/s Average min max 00:19:04.263 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 27.00 0.11 37856.71 243.65 40975.00 00:19:04.263 ======================================================== 00:19:04.263 Total : 27.00 0.11 37856.71 243.65 40975.00 00:19:04.263 00:19:04.263 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1691328 00:19:04.263 Initializing NVMe Controllers 00:19:04.263 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:04.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:04.263 Initialization complete. Launching workers. 00:19:04.263 ======================================================== 00:19:04.263 Latency(us) 00:19:04.263 Device Information : IOPS MiB/s Average min max 00:19:04.263 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 5977.98 23.35 166.89 153.11 424.80 00:19:04.263 ======================================================== 00:19:04.263 Total : 5977.98 23.35 166.89 153.11 424.80 00:19:04.263 00:19:04.263 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1691329 00:19:04.263 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:04.263 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:04.263 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:04.263 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:04.263 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:04.263 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:04.263 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:04.263 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:04.263 rmmod nvme_tcp 00:19:04.263 rmmod nvme_fabrics 00:19:04.263 rmmod nvme_keyring 00:19:04.263 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:04.263 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:04.263 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:04.263 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- 
# '[' -n 1691305 ']' 00:19:04.263 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1691305 00:19:04.263 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 1691305 ']' 00:19:04.263 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 1691305 00:19:04.263 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:19:04.263 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:04.263 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1691305 00:19:04.263 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:04.263 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:04.263 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1691305' 00:19:04.263 killing process with pid 1691305 00:19:04.263 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 1691305 00:19:04.263 20:49:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 1691305 00:19:04.522 20:49:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:04.522 20:49:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:04.522 20:49:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:04.522 20:49:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:04.522 20:49:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:04.522 20:49:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:04.522 20:49:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:04.522 20:49:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:04.522 20:49:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:04.522 20:49:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.522 20:49:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:04.522 20:49:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.057 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:07.057 00:19:07.057 real 0m6.479s 00:19:07.057 user 0m5.855s 00:19:07.057 sys 0m2.635s 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:07.058 ************************************ 00:19:07.058 END TEST nvmf_control_msg_list 00:19:07.058 ************************************ 
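For reference, the control_msg_list run above reduces to the following target-side RPC sequence plus three single-queue initiators (a condensed sketch; rpc_cmd in the test script wraps scripts/rpc.py against the nvmf_tgt started earlier, and the loop below stands in for the three separate perf invocations with core masks 0x2, 0x4 and 0x8):

  # transport created with in-capsule data limited to 768 bytes and a single control message buffer
  ./scripts/rpc.py nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
  ./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # three perf instances on different cores contend for that one control message slot
  for mask in 0x2 0x4 0x8; do
      ./build/bin/spdk_nvme_perf -c $mask -q 1 -o 4096 -w randread -t 1 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  done
  wait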
00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:07.058 ************************************ 00:19:07.058 START TEST nvmf_wait_for_buf 00:19:07.058 ************************************ 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:07.058 * Looking for test storage... 00:19:07.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:07.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.058 --rc genhtml_branch_coverage=1 00:19:07.058 --rc genhtml_function_coverage=1 00:19:07.058 --rc genhtml_legend=1 00:19:07.058 --rc geninfo_all_blocks=1 00:19:07.058 --rc geninfo_unexecuted_blocks=1 00:19:07.058 00:19:07.058 ' 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:07.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.058 --rc genhtml_branch_coverage=1 00:19:07.058 --rc genhtml_function_coverage=1 00:19:07.058 --rc genhtml_legend=1 00:19:07.058 --rc geninfo_all_blocks=1 00:19:07.058 --rc geninfo_unexecuted_blocks=1 00:19:07.058 00:19:07.058 ' 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:07.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.058 --rc genhtml_branch_coverage=1 00:19:07.058 --rc genhtml_function_coverage=1 00:19:07.058 --rc genhtml_legend=1 00:19:07.058 --rc geninfo_all_blocks=1 00:19:07.058 --rc geninfo_unexecuted_blocks=1 00:19:07.058 00:19:07.058 ' 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:07.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.058 --rc genhtml_branch_coverage=1 00:19:07.058 --rc genhtml_function_coverage=1 00:19:07.058 --rc genhtml_legend=1 00:19:07.058 --rc geninfo_all_blocks=1 00:19:07.058 --rc geninfo_unexecuted_blocks=1 00:19:07.058 00:19:07.058 ' 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:07.058 20:49:10 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.058 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:07.059 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.059 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:07.059 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:07.059 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:07.059 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:07.059 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:07.059 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:07.059 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:07.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:07.059 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:07.059 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:07.059 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:07.059 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:07.059 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:19:07.059 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:07.059 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:07.059 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:07.059 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:07.059 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.059 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:07.059 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.059 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:07.059 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:07.059 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:19:07.059 20:49:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:08.966 
20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:08.966 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:08.966 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:19:08.966 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:08.967 Found net devices under 0000:09:00.0: cvl_0_0 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:08.967 Found net devices under 0000:09:00.1: cvl_0_1 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:08.967 20:49:12 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:08.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:08.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms 00:19:08.967 00:19:08.967 --- 10.0.0.2 ping statistics --- 00:19:08.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.967 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:08.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:08.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:19:08.967 00:19:08.967 --- 10.0.0.1 ping statistics --- 00:19:08.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.967 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1693409 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 1693409 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 1693409 ']' 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:08.967 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:08.967 [2024-11-26 20:49:12.651183] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:19:08.967 [2024-11-26 20:49:12.651259] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:09.225 [2024-11-26 20:49:12.726711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.225 [2024-11-26 20:49:12.784210] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:09.225 [2024-11-26 20:49:12.784265] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:09.225 [2024-11-26 20:49:12.784278] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:09.225 [2024-11-26 20:49:12.784288] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:09.225 [2024-11-26 20:49:12.784297] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:09.225 [2024-11-26 20:49:12.784909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.225 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:09.225 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:19:09.225 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:09.225 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:09.225 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:09.225 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:09.225 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:09.225 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:09.225 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:09.225 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.225 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:09.225 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.225 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:09.225 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.225 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:09.483 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.483 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:09.483 20:49:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.483 20:49:12 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:09.483 20:49:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.483 20:49:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:09.483 20:49:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.483 20:49:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:09.483 Malloc0 00:19:09.483 20:49:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.483 20:49:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:09.483 20:49:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.483 20:49:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:09.483 [2024-11-26 20:49:13.026017] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:09.483 20:49:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.483 20:49:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:09.483 20:49:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.483 20:49:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:09.483 20:49:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.483 20:49:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:09.483 20:49:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.483 20:49:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:09.483 20:49:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.483 20:49:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:09.483 20:49:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.483 20:49:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:09.483 [2024-11-26 20:49:13.050213] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:09.483 20:49:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.483 20:49:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:09.483 [2024-11-26 20:49:13.138466] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:11.382 Initializing NVMe Controllers 00:19:11.382 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:11.382 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:11.382 Initialization complete. Launching workers. 00:19:11.382 ======================================================== 00:19:11.382 Latency(us) 00:19:11.382 Device Information : IOPS MiB/s Average min max 00:19:11.382 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 125.00 15.62 33293.84 23972.36 63847.13 00:19:11.382 ======================================================== 00:19:11.382 Total : 125.00 15.62 33293.84 23972.36 63847.13 00:19:11.382 00:19:11.382 20:49:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:11.382 20:49:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:11.382 20:49:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.382 20:49:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:11.382 20:49:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.382 20:49:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1974 00:19:11.382 20:49:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1974 -eq 0 ]] 00:19:11.382 20:49:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:11.383 20:49:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:11.383 20:49:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:11.383 20:49:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:11.383 20:49:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:11.383 20:49:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:11.383 20:49:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:11.383 20:49:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:11.383 rmmod nvme_tcp 00:19:11.383 rmmod nvme_fabrics 00:19:11.383 rmmod nvme_keyring 00:19:11.383 20:49:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:11.383 20:49:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:11.383 20:49:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:11.383 20:49:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1693409 ']' 00:19:11.383 20:49:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1693409 00:19:11.383 20:49:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 1693409 ']' 00:19:11.383 20:49:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 1693409 00:19:11.383 20:49:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:19:11.383 20:49:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:11.383 20:49:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1693409 00:19:11.383 20:49:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:11.383 20:49:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:11.383 20:49:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1693409' 00:19:11.383 killing process with pid 1693409 00:19:11.383 20:49:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 1693409 00:19:11.383 20:49:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 1693409 00:19:11.383 20:49:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:11.383 20:49:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:11.383 20:49:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:11.383 20:49:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:11.383 20:49:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:19:11.383 20:49:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:11.383 20:49:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:19:11.383 20:49:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:11.383 20:49:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:11.383 20:49:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.383 20:49:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:11.383 20:49:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.907 20:49:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:13.907 00:19:13.907 real 0m6.836s 00:19:13.907 user 0m3.251s 00:19:13.907 sys 0m2.056s 00:19:13.907 20:49:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:13.907 20:49:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:13.907 ************************************ 00:19:13.907 END TEST nvmf_wait_for_buf 00:19:13.907 ************************************ 00:19:13.907 20:49:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:13.907 20:49:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:19:13.907 20:49:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:19:13.907 20:49:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:19:13.907 20:49:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:19:13.907 20:49:17 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:15.806 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:15.806 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:15.806 Found net devices under 0000:09:00.0: cvl_0_0 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:15.806 Found net devices under 0000:09:00.1: cvl_0_1 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:15.806 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:15.807 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:19:15.807 20:49:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:15.807 20:49:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:15.807 20:49:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:15.807 20:49:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:15.807 ************************************ 00:19:15.807 START TEST nvmf_perf_adq 00:19:15.807 ************************************ 00:19:15.807 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:15.807 * Looking for test storage... 00:19:15.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:15.807 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:15.807 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:19:15.807 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:16.067 20:49:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:16.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.067 --rc genhtml_branch_coverage=1 00:19:16.067 --rc genhtml_function_coverage=1 00:19:16.067 --rc genhtml_legend=1 00:19:16.067 --rc geninfo_all_blocks=1 00:19:16.067 --rc geninfo_unexecuted_blocks=1 00:19:16.067 00:19:16.067 ' 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:16.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.067 --rc genhtml_branch_coverage=1 00:19:16.067 --rc genhtml_function_coverage=1 00:19:16.067 --rc genhtml_legend=1 00:19:16.067 --rc geninfo_all_blocks=1 00:19:16.067 --rc geninfo_unexecuted_blocks=1 00:19:16.067 00:19:16.067 ' 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:16.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.067 --rc genhtml_branch_coverage=1 00:19:16.067 --rc genhtml_function_coverage=1 00:19:16.067 --rc genhtml_legend=1 00:19:16.067 --rc geninfo_all_blocks=1 00:19:16.067 --rc geninfo_unexecuted_blocks=1 00:19:16.067 00:19:16.067 ' 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:16.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.067 --rc genhtml_branch_coverage=1 00:19:16.067 --rc genhtml_function_coverage=1 00:19:16.067 --rc genhtml_legend=1 00:19:16.067 --rc geninfo_all_blocks=1 00:19:16.067 --rc geninfo_unexecuted_blocks=1 00:19:16.067 00:19:16.067 ' 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:16.067 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:16.068 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.068 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.068 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.068 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:16.068 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.068 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:19:16.068 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:16.068 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:16.068 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:16.068 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:16.068 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:16.068 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:16.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:16.068 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:16.068 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:16.068 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:16.068 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:16.068 20:49:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:16.068 20:49:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:18.596 20:49:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:18.596 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:18.596 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:18.596 Found net devices under 0000:09:00.0: cvl_0_0 00:19:18.596 20:49:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:18.596 Found net devices under 0000:09:00.1: cvl_0_1 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:19:18.596 20:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:18.855 20:49:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:20.754 20:49:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:19:26.025 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:19:26.025 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:26.025 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:26.025 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:26.025 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:26.025 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:26.025 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.025 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:26.025 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:26.026 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:26.026 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:26.026 Found net devices under 0000:09:00.0: cvl_0_0 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:26.026 Found net devices under 0000:09:00.1: cvl_0_1 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:26.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:26.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:19:26.026 00:19:26.026 --- 10.0.0.2 ping statistics --- 00:19:26.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.026 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:19:26.026 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:26.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:26.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:19:26.026 00:19:26.026 --- 10.0.0.1 ping statistics --- 00:19:26.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.027 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:19:26.027 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:26.027 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:19:26.027 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:26.027 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:26.027 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:26.027 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:26.027 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:26.027 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:26.027 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:26.027 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:26.027 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:26.027 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:26.027 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:26.027 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1698271 00:19:26.027 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:26.027 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1698271 00:19:26.027 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1698271 ']' 00:19:26.027 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.027 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:26.027 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.027 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.027 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:26.027 [2024-11-26 20:49:29.531987] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:19:26.027 [2024-11-26 20:49:29.532053] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:26.027 [2024-11-26 20:49:29.600517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:26.027 [2024-11-26 20:49:29.656892] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:26.027 [2024-11-26 20:49:29.656940] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:26.027 [2024-11-26 20:49:29.656961] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:26.027 [2024-11-26 20:49:29.656971] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:26.027 [2024-11-26 20:49:29.656980] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:26.027 [2024-11-26 20:49:29.658510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:26.027 [2024-11-26 20:49:29.658567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.027 [2024-11-26 20:49:29.658640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:26.027 [2024-11-26 20:49:29.658644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.353 
20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:26.353 [2024-11-26 20:49:29.927896] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:26.353 Malloc1 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:26.353 [2024-11-26 20:49:29.986194] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1698417 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:26.353 20:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:19:28.897 20:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:19:28.897 20:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.897 20:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:28.897 20:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.897 20:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:19:28.897 "tick_rate": 2700000000, 00:19:28.897 "poll_groups": [ 00:19:28.897 { 00:19:28.897 "name": "nvmf_tgt_poll_group_000", 00:19:28.897 "admin_qpairs": 1, 00:19:28.897 "io_qpairs": 1, 00:19:28.897 "current_admin_qpairs": 1, 00:19:28.897 "current_io_qpairs": 1, 00:19:28.897 "pending_bdev_io": 0, 00:19:28.897 "completed_nvme_io": 19668, 00:19:28.897 "transports": [ 00:19:28.897 { 00:19:28.897 "trtype": "TCP" 00:19:28.897 } 00:19:28.897 ] 00:19:28.897 }, 00:19:28.897 { 00:19:28.897 "name": "nvmf_tgt_poll_group_001", 00:19:28.897 "admin_qpairs": 0, 00:19:28.897 "io_qpairs": 1, 00:19:28.897 "current_admin_qpairs": 0, 00:19:28.897 "current_io_qpairs": 1, 00:19:28.897 "pending_bdev_io": 0, 00:19:28.897 "completed_nvme_io": 19726, 00:19:28.897 "transports": [ 00:19:28.897 { 00:19:28.897 "trtype": "TCP" 00:19:28.897 } 00:19:28.897 ] 00:19:28.897 }, 00:19:28.897 { 00:19:28.897 "name": "nvmf_tgt_poll_group_002", 00:19:28.897 "admin_qpairs": 0, 00:19:28.897 "io_qpairs": 1, 00:19:28.897 "current_admin_qpairs": 0, 00:19:28.897 "current_io_qpairs": 1, 00:19:28.897 "pending_bdev_io": 0, 00:19:28.897 "completed_nvme_io": 19650, 00:19:28.897 "transports": [ 00:19:28.897 { 00:19:28.897 "trtype": "TCP" 00:19:28.897 } 00:19:28.897 ] 00:19:28.897 }, 00:19:28.897 { 00:19:28.897 "name": "nvmf_tgt_poll_group_003", 00:19:28.897 "admin_qpairs": 0, 00:19:28.897 "io_qpairs": 1, 00:19:28.897 "current_admin_qpairs": 0, 00:19:28.897 "current_io_qpairs": 1, 00:19:28.897 "pending_bdev_io": 0, 00:19:28.897 "completed_nvme_io": 19232, 00:19:28.897 "transports": [ 00:19:28.897 { 00:19:28.897 "trtype": "TCP" 00:19:28.897 } 00:19:28.897 ] 00:19:28.897 } 00:19:28.897 ] 00:19:28.897 }' 00:19:28.897 20:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:28.897 20:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:19:28.897 20:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:19:28.897 20:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:19:28.897 20:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1698417 00:19:37.002 Initializing NVMe Controllers 00:19:37.002 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:37.002 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:37.002 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:37.002 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:37.002 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:37.002 
Initialization complete. Launching workers. 00:19:37.002 ======================================================== 00:19:37.002 Latency(us) 00:19:37.002 Device Information : IOPS MiB/s Average min max 00:19:37.002 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10156.70 39.67 6301.45 2491.05 10003.03 00:19:37.002 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10308.00 40.27 6209.49 2203.55 10278.70 00:19:37.002 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10369.00 40.50 6173.04 2374.53 10260.43 00:19:37.002 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10314.40 40.29 6206.56 2507.26 10333.78 00:19:37.002 ======================================================== 00:19:37.002 Total : 41148.10 160.73 6222.27 2203.55 10333.78 00:19:37.002 00:19:37.002 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:19:37.002 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:37.002 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:19:37.002 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:37.002 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:19:37.002 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:37.002 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:37.002 rmmod nvme_tcp 00:19:37.002 rmmod nvme_fabrics 00:19:37.002 rmmod nvme_keyring 00:19:37.002 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:37.002 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:19:37.002 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:19:37.002 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1698271 ']' 00:19:37.002 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1698271 00:19:37.002 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1698271 ']' 00:19:37.002 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1698271 00:19:37.002 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:19:37.002 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:37.002 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1698271 00:19:37.002 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:37.002 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:37.002 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1698271' 00:19:37.002 killing process with pid 1698271 00:19:37.002 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1698271 00:19:37.002 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1698271 00:19:37.002 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:37.002 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:37.002 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:37.002 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:19:37.002 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:19:37.002 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:37.002 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:19:37.002 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:37.002 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:37.002 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.002 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:37.002 20:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:38.907 20:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:38.907 20:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:19:38.907 20:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:19:38.907 20:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:39.475 20:49:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:42.002 20:49:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:47.278 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:47.279 20:49:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:19:47.279 Found 0000:09:00.0 (0x8086 - 0x159b) 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:19:47.279 Found 0000:09:00.1 (0x8086 - 0x159b) 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:19:47.279 Found net devices under 0000:09:00.0: cvl_0_0 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:47.279 20:49:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:19:47.279 Found net devices under 0000:09:00.1: cvl_0_1 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:47.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:47.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:19:47.279 00:19:47.279 --- 10.0.0.2 ping statistics --- 00:19:47.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.279 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:47.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:47.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:19:47.279 00:19:47.279 --- 10.0.0.1 ping statistics --- 00:19:47.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:47.279 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:19:47.279 net.core.busy_poll = 1 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
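The nvmf_tcp_init trace above boils down to a small two-port test bed: one E810 port (cvl_0_0) is moved into a private network namespace and acts as the NVMe/TCP target at 10.0.0.2, its sibling port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, and an iptables rule opens TCP port 4420. A condensed sketch of that sequence, using the interface names and addresses this run happened to use (run as root):

  # Target port lives in its own namespace; initiator port stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port; the SPDK_NVMF comment tag lets the teardown step strip the rule again.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  # Verify reachability in both directions before starting the target.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1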
00:19:47.279 net.core.busy_read = 1 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:47.279 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1701051 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1701051 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1701051 ']' 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:47.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:47.280 [2024-11-26 20:49:50.430131] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:19:47.280 [2024-11-26 20:49:50.430217] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:47.280 [2024-11-26 20:49:50.510107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:47.280 [2024-11-26 20:49:50.571851] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
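adq_configure_driver, whose trace appears above, holds the ADQ-specific setup: hardware TC offload is enabled on the E810 port, busy polling is switched on, the NIC queues are split into two traffic classes with mqprio, and a flower filter steers NVMe/TCP traffic for 10.0.0.2:4420 into traffic class 1 in hardware. Roughly the following, with the ip netns exec cvl_0_0_ns_spdk prefix on the ethtool/tc/set_xps_rxqs commands dropped for brevity:

  ethtool --offload cvl_0_0 hw-tc-offload on
  ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  # Two traffic classes offloaded to the NIC: TC0 -> queues 0-1, TC1 -> queues 2-3.
  tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  tc qdisc add dev cvl_0_0 ingress
  # Steer NVMe/TCP connections to port 4420 into hardware TC 1.
  tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
  # Pin transmit/receive queue affinity using the helper shipped in the SPDK tree.
  scripts/perf/nvmf/set_xps_rxqs cvl_0_0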
00:19:47.280 [2024-11-26 20:49:50.571898] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:47.280 [2024-11-26 20:49:50.571911] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:47.280 [2024-11-26 20:49:50.571922] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:47.280 [2024-11-26 20:49:50.571932] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:47.280 [2024-11-26 20:49:50.573507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.280 [2024-11-26 20:49:50.573567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:47.280 [2024-11-26 20:49:50.573590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:47.280 [2024-11-26 20:49:50.573594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.280 20:49:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:47.280 [2024-11-26 20:49:50.900814] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:47.280 Malloc1 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:47.280 [2024-11-26 20:49:50.963071] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1701085 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:19:47.280 20:49:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:49.807 20:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:19:49.807 20:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.807 20:49:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:49.807 20:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.807 20:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:19:49.807 "tick_rate": 2700000000, 00:19:49.807 "poll_groups": [ 00:19:49.807 { 00:19:49.807 "name": "nvmf_tgt_poll_group_000", 00:19:49.807 "admin_qpairs": 1, 00:19:49.807 "io_qpairs": 3, 00:19:49.807 "current_admin_qpairs": 1, 00:19:49.807 "current_io_qpairs": 3, 00:19:49.807 "pending_bdev_io": 0, 00:19:49.807 "completed_nvme_io": 25837, 00:19:49.807 "transports": [ 00:19:49.807 { 00:19:49.807 "trtype": "TCP" 00:19:49.807 } 00:19:49.807 ] 00:19:49.807 }, 00:19:49.807 { 00:19:49.807 "name": "nvmf_tgt_poll_group_001", 00:19:49.807 "admin_qpairs": 0, 00:19:49.807 "io_qpairs": 1, 00:19:49.807 "current_admin_qpairs": 0, 00:19:49.807 "current_io_qpairs": 1, 00:19:49.807 "pending_bdev_io": 0, 00:19:49.807 "completed_nvme_io": 25282, 00:19:49.807 "transports": [ 00:19:49.807 { 00:19:49.807 "trtype": "TCP" 00:19:49.807 } 00:19:49.807 ] 00:19:49.807 }, 00:19:49.807 { 00:19:49.807 "name": "nvmf_tgt_poll_group_002", 00:19:49.807 "admin_qpairs": 0, 00:19:49.807 "io_qpairs": 0, 00:19:49.807 "current_admin_qpairs": 0, 00:19:49.807 "current_io_qpairs": 0, 00:19:49.807 "pending_bdev_io": 0, 00:19:49.807 "completed_nvme_io": 0, 00:19:49.807 "transports": [ 00:19:49.807 { 00:19:49.807 "trtype": "TCP" 00:19:49.807 } 00:19:49.807 ] 00:19:49.807 }, 00:19:49.807 { 00:19:49.807 "name": "nvmf_tgt_poll_group_003", 00:19:49.807 "admin_qpairs": 0, 00:19:49.807 "io_qpairs": 0, 00:19:49.807 "current_admin_qpairs": 0, 00:19:49.807 "current_io_qpairs": 0, 00:19:49.807 "pending_bdev_io": 0, 00:19:49.807 "completed_nvme_io": 0, 00:19:49.807 "transports": [ 00:19:49.807 { 00:19:49.807 "trtype": "TCP" 00:19:49.807 } 00:19:49.807 ] 00:19:49.807 } 00:19:49.807 ] 00:19:49.807 }' 00:19:49.807 20:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:49.807 20:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:19:49.807 20:49:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:19:49.807 20:49:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:19:49.807 20:49:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1701085 00:19:57.913 Initializing NVMe Controllers 00:19:57.913 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:57.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:57.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:57.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:57.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:57.913 Initialization complete. Launching workers. 
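The nvmf_get_stats dump above is what the test actually checks: the perf job drives only cores 4-7 of the initiator (spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0), so with ADQ steering in place the target should concentrate those I/O queue pairs on a subset of its four poll groups and leave the rest idle. A sketch of the same check issued by hand; the direct rpc.py invocation and the error message are illustrative, the test itself goes through its rpc_cmd wrapper:

  # Count poll groups that carried no I/O queue pairs; this run reports 2 of the 4 as idle.
  idle=$(scripts/rpc.py nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l)
  # perf_adq.sh treats fewer than 2 idle poll groups as a failed ADQ configuration.
  if [[ $idle -lt 2 ]]; then echo "ADQ steering left too few poll groups idle" >&2; fi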
00:19:57.913 ======================================================== 00:19:57.913 Latency(us) 00:19:57.913 Device Information : IOPS MiB/s Average min max 00:19:57.913 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4522.40 17.67 14216.74 1647.86 62522.39 00:19:57.913 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4147.10 16.20 15472.60 1867.54 61745.67 00:19:57.913 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13408.10 52.38 4773.14 1641.79 8480.20 00:19:57.913 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4962.30 19.38 12932.64 1767.82 62134.66 00:19:57.913 ======================================================== 00:19:57.913 Total : 27039.89 105.62 9490.96 1641.79 62522.39 00:19:57.913 00:19:57.913 20:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:19:57.913 20:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:57.913 20:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:19:57.913 20:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:57.913 20:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:19:57.913 20:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:57.913 20:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:57.913 rmmod nvme_tcp 00:19:57.913 rmmod nvme_fabrics 00:19:57.914 rmmod nvme_keyring 00:19:57.914 20:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:57.914 20:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:19:57.914 20:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:19:57.914 20:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1701051 ']' 00:19:57.914 20:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1701051 00:19:57.914 20:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1701051 ']' 00:19:57.914 20:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1701051 00:19:57.914 20:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:19:57.914 20:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:57.914 20:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1701051 00:19:57.914 20:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:57.914 20:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:57.914 20:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1701051' 00:19:57.914 killing process with pid 1701051 00:19:57.914 20:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1701051 00:19:57.914 20:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1701051 00:19:57.914 20:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:57.914 
20:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:57.914 20:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:57.914 20:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:19:57.914 20:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:19:57.914 20:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:57.914 20:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:19:57.914 20:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:57.914 20:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:57.914 20:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.914 20:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:57.914 20:50:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.199 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:01.199 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:20:01.199 00:20:01.199 real 0m45.208s 00:20:01.199 user 2m41.582s 00:20:01.199 sys 0m9.068s 00:20:01.199 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:01.200 ************************************ 00:20:01.200 END TEST nvmf_perf_adq 00:20:01.200 ************************************ 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:01.200 ************************************ 00:20:01.200 START TEST nvmf_shutdown 00:20:01.200 ************************************ 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:01.200 * Looking for test storage... 
00:20:01.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:01.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.200 --rc genhtml_branch_coverage=1 00:20:01.200 --rc genhtml_function_coverage=1 00:20:01.200 --rc genhtml_legend=1 00:20:01.200 --rc geninfo_all_blocks=1 00:20:01.200 --rc geninfo_unexecuted_blocks=1 00:20:01.200 00:20:01.200 ' 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:01.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.200 --rc genhtml_branch_coverage=1 00:20:01.200 --rc genhtml_function_coverage=1 00:20:01.200 --rc genhtml_legend=1 00:20:01.200 --rc geninfo_all_blocks=1 00:20:01.200 --rc geninfo_unexecuted_blocks=1 00:20:01.200 00:20:01.200 ' 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:01.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.200 --rc genhtml_branch_coverage=1 00:20:01.200 --rc genhtml_function_coverage=1 00:20:01.200 --rc genhtml_legend=1 00:20:01.200 --rc geninfo_all_blocks=1 00:20:01.200 --rc geninfo_unexecuted_blocks=1 00:20:01.200 00:20:01.200 ' 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:01.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.200 --rc genhtml_branch_coverage=1 00:20:01.200 --rc genhtml_function_coverage=1 00:20:01.200 --rc genhtml_legend=1 00:20:01.200 --rc geninfo_all_blocks=1 00:20:01.200 --rc geninfo_unexecuted_blocks=1 00:20:01.200 00:20:01.200 ' 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
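The scripts/common.sh trace above is the lcov version gate used before coverage options are set: lt 1.15 2 calls cmp_versions, which splits both version strings on '.', '-' and ':' and compares them field by field as integers. A minimal sketch of the same idea, not the verbatim helper (the real one also validates that each field is numeric):

  cmp_versions() {                      # usage: cmp_versions 1.15 '<' 2
    local op=$2 v a b
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
      a=${ver1[v]:-0} b=${ver2[v]:-0}
      ((a > b)) && { [[ $op == '>' || $op == '>=' ]]; return; }
      ((a < b)) && { [[ $op == '<' || $op == '<=' ]]; return; }
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]
  }
  lt() { cmp_versions "$1" '<' "$2"; }  # lt 1.15 2 succeeds: lcov 1.15 is older than 2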
00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:20:01.200 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:01.201 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:01.201 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:01.201 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:01.201 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:01.201 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:01.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:01.201 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:01.201 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:01.201 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:01.201 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:01.201 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:01.201 20:50:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:01.201 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:01.201 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:01.201 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:01.201 ************************************ 00:20:01.201 START TEST nvmf_shutdown_tc1 00:20:01.201 ************************************ 00:20:01.201 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:20:01.201 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:20:01.201 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:01.201 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:01.201 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:01.201 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:01.201 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:01.201 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:01.201 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.201 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:01.201 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.201 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:01.201 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:01.201 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:01.201 20:50:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:03.732 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:03.733 20:50:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:03.733 20:50:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:03.733 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:03.733 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:03.733 Found net devices under 0000:09:00.0: cvl_0_0 00:20:03.733 20:50:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:03.733 Found net devices under 0000:09:00.1: cvl_0_1 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:03.733 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:03.734 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:03.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:03.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:20:03.734 00:20:03.734 --- 10.0.0.2 ping statistics --- 00:20:03.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.734 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:20:03.734 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:03.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:03.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:20:03.734 00:20:03.734 --- 10.0.0.1 ping statistics --- 00:20:03.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.734 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:20:03.734 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:03.734 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:20:03.734 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:03.734 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:03.734 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:03.734 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:03.734 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:03.734 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:03.734 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:03.734 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:03.734 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:03.734 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:03.734 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:03.734 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1704422 00:20:03.734 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:03.734 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1704422 00:20:03.734 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1704422 ']' 00:20:03.734 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.734 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:03.734 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
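nvmfappstart, traced above, launches the target application inside the test namespace and then blocks until its JSON-RPC socket answers. The launch line below is taken from the trace; the polling loop is only an illustrative stand-in for the suite's waitforlisten helper:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!
  # Poll until the RPC socket accepts a trivial request, bailing out if the target dies first.
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
  done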
00:20:03.734 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:03.734 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:03.734 [2024-11-26 20:50:07.243788] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:20:03.734 [2024-11-26 20:50:07.243870] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.734 [2024-11-26 20:50:07.321495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:03.734 [2024-11-26 20:50:07.380252] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.734 [2024-11-26 20:50:07.380320] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:03.734 [2024-11-26 20:50:07.380335] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.734 [2024-11-26 20:50:07.380346] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.734 [2024-11-26 20:50:07.380365] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:03.734 [2024-11-26 20:50:07.381953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:03.734 [2024-11-26 20:50:07.382019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:03.734 [2024-11-26 20:50:07.382042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:03.734 [2024-11-26 20:50:07.382047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:03.993 [2024-11-26 20:50:07.520263] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:03.993 20:50:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.993 20:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:03.993 Malloc1 
00:20:03.993 [2024-11-26 20:50:07.610234] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:03.993 Malloc2 00:20:03.993 Malloc3 00:20:04.251 Malloc4 00:20:04.251 Malloc5 00:20:04.251 Malloc6 00:20:04.251 Malloc7 00:20:04.251 Malloc8 00:20:04.510 Malloc9 00:20:04.510 Malloc10 00:20:04.510 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.510 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:04.510 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:04.510 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:04.510 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1704568 00:20:04.510 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1704568 /var/tmp/bdevperf.sock 00:20:04.510 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1704568 ']' 00:20:04.510 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:04.510 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:04.510 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:04.510 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:04.510 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:04.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:04.510 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:04.510 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:04.510 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:04.510 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:04.510 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:04.510 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:04.510 { 00:20:04.510 "params": { 00:20:04.510 "name": "Nvme$subsystem", 00:20:04.510 "trtype": "$TEST_TRANSPORT", 00:20:04.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:04.510 "adrfam": "ipv4", 00:20:04.510 "trsvcid": "$NVMF_PORT", 00:20:04.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:04.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:04.511 "hdgst": ${hdgst:-false}, 00:20:04.511 "ddgst": ${ddgst:-false} 00:20:04.511 }, 00:20:04.511 "method": "bdev_nvme_attach_controller" 00:20:04.511 } 00:20:04.511 EOF 00:20:04.511 )") 00:20:04.511 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:04.511 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:04.511 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:04.511 { 00:20:04.511 "params": { 00:20:04.511 "name": "Nvme$subsystem", 00:20:04.511 "trtype": "$TEST_TRANSPORT", 00:20:04.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:04.511 "adrfam": "ipv4", 00:20:04.511 "trsvcid": "$NVMF_PORT", 00:20:04.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:04.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:04.511 "hdgst": ${hdgst:-false}, 00:20:04.511 "ddgst": ${ddgst:-false} 00:20:04.511 }, 00:20:04.511 "method": "bdev_nvme_attach_controller" 00:20:04.511 } 00:20:04.511 EOF 00:20:04.511 )") 00:20:04.511 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:04.511 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:04.511 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:04.511 { 00:20:04.511 "params": { 00:20:04.511 "name": "Nvme$subsystem", 00:20:04.511 "trtype": "$TEST_TRANSPORT", 00:20:04.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:04.511 "adrfam": "ipv4", 00:20:04.511 "trsvcid": "$NVMF_PORT", 00:20:04.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:04.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:04.511 "hdgst": ${hdgst:-false}, 00:20:04.511 "ddgst": ${ddgst:-false} 00:20:04.511 }, 00:20:04.511 "method": "bdev_nvme_attach_controller" 00:20:04.511 } 00:20:04.511 EOF 00:20:04.511 )") 00:20:04.511 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:04.511 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:04.511 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:20:04.511 { 00:20:04.511 "params": { 00:20:04.511 "name": "Nvme$subsystem", 00:20:04.511 "trtype": "$TEST_TRANSPORT", 00:20:04.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:04.511 "adrfam": "ipv4", 00:20:04.511 "trsvcid": "$NVMF_PORT", 00:20:04.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:04.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:04.511 "hdgst": ${hdgst:-false}, 00:20:04.511 "ddgst": ${ddgst:-false} 00:20:04.511 }, 00:20:04.511 "method": "bdev_nvme_attach_controller" 00:20:04.511 } 00:20:04.511 EOF 00:20:04.511 )") 00:20:04.511 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:04.511 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:04.511 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:04.511 { 00:20:04.511 "params": { 00:20:04.511 "name": "Nvme$subsystem", 00:20:04.511 "trtype": "$TEST_TRANSPORT", 00:20:04.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:04.511 "adrfam": "ipv4", 00:20:04.511 "trsvcid": "$NVMF_PORT", 00:20:04.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:04.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:04.511 "hdgst": ${hdgst:-false}, 00:20:04.511 "ddgst": ${ddgst:-false} 00:20:04.511 }, 00:20:04.511 "method": "bdev_nvme_attach_controller" 00:20:04.511 } 00:20:04.511 EOF 00:20:04.511 )") 00:20:04.511 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:04.511 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:04.511 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:04.511 { 00:20:04.511 "params": { 00:20:04.511 "name": "Nvme$subsystem", 00:20:04.511 "trtype": "$TEST_TRANSPORT", 00:20:04.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:04.511 "adrfam": "ipv4", 00:20:04.511 "trsvcid": "$NVMF_PORT", 00:20:04.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:04.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:04.511 "hdgst": ${hdgst:-false}, 00:20:04.511 "ddgst": ${ddgst:-false} 00:20:04.511 }, 00:20:04.511 "method": "bdev_nvme_attach_controller" 00:20:04.511 } 00:20:04.511 EOF 00:20:04.511 )") 00:20:04.511 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:04.511 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:04.511 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:04.511 { 00:20:04.511 "params": { 00:20:04.511 "name": "Nvme$subsystem", 00:20:04.511 "trtype": "$TEST_TRANSPORT", 00:20:04.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:04.511 "adrfam": "ipv4", 00:20:04.511 "trsvcid": "$NVMF_PORT", 00:20:04.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:04.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:04.511 "hdgst": ${hdgst:-false}, 00:20:04.511 "ddgst": ${ddgst:-false} 00:20:04.511 }, 00:20:04.511 "method": "bdev_nvme_attach_controller" 00:20:04.511 } 00:20:04.511 EOF 00:20:04.511 )") 00:20:04.511 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:04.511 20:50:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:04.511 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:04.511 { 00:20:04.511 "params": { 00:20:04.511 "name": "Nvme$subsystem", 00:20:04.511 "trtype": "$TEST_TRANSPORT", 00:20:04.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:04.511 "adrfam": "ipv4", 00:20:04.511 "trsvcid": "$NVMF_PORT", 00:20:04.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:04.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:04.511 "hdgst": ${hdgst:-false}, 00:20:04.511 "ddgst": ${ddgst:-false} 00:20:04.511 }, 00:20:04.511 "method": "bdev_nvme_attach_controller" 00:20:04.511 } 00:20:04.511 EOF 00:20:04.511 )") 00:20:04.511 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:04.511 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:04.511 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:04.511 { 00:20:04.511 "params": { 00:20:04.511 "name": "Nvme$subsystem", 00:20:04.511 "trtype": "$TEST_TRANSPORT", 00:20:04.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:04.511 "adrfam": "ipv4", 00:20:04.511 "trsvcid": "$NVMF_PORT", 00:20:04.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:04.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:04.511 "hdgst": ${hdgst:-false}, 00:20:04.511 "ddgst": ${ddgst:-false} 00:20:04.511 }, 00:20:04.511 "method": "bdev_nvme_attach_controller" 00:20:04.511 } 00:20:04.511 EOF 00:20:04.511 )") 00:20:04.511 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:04.511 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:04.511 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:04.511 { 00:20:04.511 "params": { 00:20:04.511 "name": "Nvme$subsystem", 00:20:04.511 "trtype": "$TEST_TRANSPORT", 00:20:04.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:04.511 "adrfam": "ipv4", 00:20:04.511 "trsvcid": "$NVMF_PORT", 00:20:04.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:04.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:04.511 "hdgst": ${hdgst:-false}, 00:20:04.511 "ddgst": ${ddgst:-false} 00:20:04.511 }, 00:20:04.511 "method": "bdev_nvme_attach_controller" 00:20:04.511 } 00:20:04.511 EOF 00:20:04.511 )") 00:20:04.511 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:04.511 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:20:04.511 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:04.511 20:50:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:04.511 "params": { 00:20:04.511 "name": "Nvme1", 00:20:04.511 "trtype": "tcp", 00:20:04.511 "traddr": "10.0.0.2", 00:20:04.511 "adrfam": "ipv4", 00:20:04.511 "trsvcid": "4420", 00:20:04.511 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.511 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:04.511 "hdgst": false, 00:20:04.511 "ddgst": false 00:20:04.511 }, 00:20:04.511 "method": "bdev_nvme_attach_controller" 00:20:04.511 },{ 00:20:04.511 "params": { 00:20:04.511 "name": "Nvme2", 00:20:04.511 "trtype": "tcp", 00:20:04.511 "traddr": "10.0.0.2", 00:20:04.511 "adrfam": "ipv4", 00:20:04.511 "trsvcid": "4420", 00:20:04.511 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:04.511 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:04.511 "hdgst": false, 00:20:04.511 "ddgst": false 00:20:04.511 }, 00:20:04.511 "method": "bdev_nvme_attach_controller" 00:20:04.511 },{ 00:20:04.511 "params": { 00:20:04.511 "name": "Nvme3", 00:20:04.511 "trtype": "tcp", 00:20:04.511 "traddr": "10.0.0.2", 00:20:04.511 "adrfam": "ipv4", 00:20:04.512 "trsvcid": "4420", 00:20:04.512 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:04.512 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:04.512 "hdgst": false, 00:20:04.512 "ddgst": false 00:20:04.512 }, 00:20:04.512 "method": "bdev_nvme_attach_controller" 00:20:04.512 },{ 00:20:04.512 "params": { 00:20:04.512 "name": "Nvme4", 00:20:04.512 "trtype": "tcp", 00:20:04.512 "traddr": "10.0.0.2", 00:20:04.512 "adrfam": "ipv4", 00:20:04.512 "trsvcid": "4420", 00:20:04.512 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:04.512 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:04.512 "hdgst": false, 00:20:04.512 "ddgst": false 00:20:04.512 }, 00:20:04.512 "method": "bdev_nvme_attach_controller" 00:20:04.512 },{ 00:20:04.512 "params": { 00:20:04.512 "name": "Nvme5", 00:20:04.512 "trtype": "tcp", 00:20:04.512 "traddr": "10.0.0.2", 00:20:04.512 "adrfam": "ipv4", 00:20:04.512 "trsvcid": "4420", 00:20:04.512 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:04.512 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:04.512 "hdgst": false, 00:20:04.512 "ddgst": false 00:20:04.512 }, 00:20:04.512 "method": "bdev_nvme_attach_controller" 00:20:04.512 },{ 00:20:04.512 "params": { 00:20:04.512 "name": "Nvme6", 00:20:04.512 "trtype": "tcp", 00:20:04.512 "traddr": "10.0.0.2", 00:20:04.512 "adrfam": "ipv4", 00:20:04.512 "trsvcid": "4420", 00:20:04.512 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:04.512 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:04.512 "hdgst": false, 00:20:04.512 "ddgst": false 00:20:04.512 }, 00:20:04.512 "method": "bdev_nvme_attach_controller" 00:20:04.512 },{ 00:20:04.512 "params": { 00:20:04.512 "name": "Nvme7", 00:20:04.512 "trtype": "tcp", 00:20:04.512 "traddr": "10.0.0.2", 00:20:04.512 "adrfam": "ipv4", 00:20:04.512 "trsvcid": "4420", 00:20:04.512 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:04.512 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:04.512 "hdgst": false, 00:20:04.512 "ddgst": false 00:20:04.512 }, 00:20:04.512 "method": "bdev_nvme_attach_controller" 00:20:04.512 },{ 00:20:04.512 "params": { 00:20:04.512 "name": "Nvme8", 00:20:04.512 "trtype": "tcp", 00:20:04.512 "traddr": "10.0.0.2", 00:20:04.512 "adrfam": "ipv4", 00:20:04.512 "trsvcid": "4420", 00:20:04.512 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:04.512 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:04.512 "hdgst": false, 00:20:04.512 "ddgst": false 00:20:04.512 }, 00:20:04.512 "method": "bdev_nvme_attach_controller" 00:20:04.512 },{ 00:20:04.512 "params": { 00:20:04.512 "name": "Nvme9", 00:20:04.512 "trtype": "tcp", 00:20:04.512 "traddr": "10.0.0.2", 00:20:04.512 "adrfam": "ipv4", 00:20:04.512 "trsvcid": "4420", 00:20:04.512 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:04.512 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:04.512 "hdgst": false, 00:20:04.512 "ddgst": false 00:20:04.512 }, 00:20:04.512 "method": "bdev_nvme_attach_controller" 00:20:04.512 },{ 00:20:04.512 "params": { 00:20:04.512 "name": "Nvme10", 00:20:04.512 "trtype": "tcp", 00:20:04.512 "traddr": "10.0.0.2", 00:20:04.512 "adrfam": "ipv4", 00:20:04.512 "trsvcid": "4420", 00:20:04.512 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:04.512 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:04.512 "hdgst": false, 00:20:04.512 "ddgst": false 00:20:04.512 }, 00:20:04.512 "method": "bdev_nvme_attach_controller" 00:20:04.512 }' 00:20:04.512 [2024-11-26 20:50:08.141351] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:20:04.512 [2024-11-26 20:50:08.141434] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:04.770 [2024-11-26 20:50:08.213719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.770 [2024-11-26 20:50:08.274640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.664 20:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:06.664 20:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:06.664 20:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:06.664 20:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.664 20:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:06.664 20:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.664 20:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1704568 00:20:06.664 20:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:20:06.664 20:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:20:07.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1704568 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:07.597 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1704422 00:20:07.597 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:07.597 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 
1 2 3 4 5 6 7 8 9 10 00:20:07.597 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:07.597 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:07.597 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:07.597 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:07.597 { 00:20:07.597 "params": { 00:20:07.597 "name": "Nvme$subsystem", 00:20:07.597 "trtype": "$TEST_TRANSPORT", 00:20:07.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:07.597 "adrfam": "ipv4", 00:20:07.597 "trsvcid": "$NVMF_PORT", 00:20:07.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:07.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:07.597 "hdgst": ${hdgst:-false}, 00:20:07.597 "ddgst": ${ddgst:-false} 00:20:07.597 }, 00:20:07.597 "method": "bdev_nvme_attach_controller" 00:20:07.597 } 00:20:07.597 EOF 00:20:07.597 )") 00:20:07.597 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:07.597 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:07.597 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:07.597 { 00:20:07.597 "params": { 00:20:07.597 "name": "Nvme$subsystem", 00:20:07.597 "trtype": "$TEST_TRANSPORT", 00:20:07.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:07.597 "adrfam": "ipv4", 00:20:07.597 "trsvcid": "$NVMF_PORT", 00:20:07.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:07.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:07.597 "hdgst": ${hdgst:-false}, 00:20:07.597 "ddgst": ${ddgst:-false} 00:20:07.597 }, 00:20:07.597 "method": "bdev_nvme_attach_controller" 00:20:07.597 } 00:20:07.597 EOF 00:20:07.597 )") 00:20:07.597 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:07.597 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:07.597 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:07.597 { 00:20:07.597 "params": { 00:20:07.597 "name": "Nvme$subsystem", 00:20:07.597 "trtype": "$TEST_TRANSPORT", 00:20:07.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:07.597 "adrfam": "ipv4", 00:20:07.597 "trsvcid": "$NVMF_PORT", 00:20:07.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:07.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:07.597 "hdgst": ${hdgst:-false}, 00:20:07.597 "ddgst": ${ddgst:-false} 00:20:07.597 }, 00:20:07.597 "method": "bdev_nvme_attach_controller" 00:20:07.597 } 00:20:07.597 EOF 00:20:07.597 )") 00:20:07.597 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:07.597 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:07.597 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:07.597 { 00:20:07.597 "params": { 00:20:07.597 "name": "Nvme$subsystem", 00:20:07.597 "trtype": "$TEST_TRANSPORT", 00:20:07.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:07.597 "adrfam": "ipv4", 00:20:07.597 
"trsvcid": "$NVMF_PORT", 00:20:07.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:07.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:07.597 "hdgst": ${hdgst:-false}, 00:20:07.597 "ddgst": ${ddgst:-false} 00:20:07.597 }, 00:20:07.597 "method": "bdev_nvme_attach_controller" 00:20:07.597 } 00:20:07.597 EOF 00:20:07.597 )") 00:20:07.597 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:07.597 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:07.597 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:07.597 { 00:20:07.597 "params": { 00:20:07.597 "name": "Nvme$subsystem", 00:20:07.597 "trtype": "$TEST_TRANSPORT", 00:20:07.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:07.597 "adrfam": "ipv4", 00:20:07.597 "trsvcid": "$NVMF_PORT", 00:20:07.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:07.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:07.597 "hdgst": ${hdgst:-false}, 00:20:07.597 "ddgst": ${ddgst:-false} 00:20:07.597 }, 00:20:07.597 "method": "bdev_nvme_attach_controller" 00:20:07.597 } 00:20:07.597 EOF 00:20:07.597 )") 00:20:07.597 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:07.597 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:07.597 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:07.597 { 00:20:07.597 "params": { 00:20:07.597 "name": "Nvme$subsystem", 00:20:07.597 "trtype": "$TEST_TRANSPORT", 00:20:07.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:07.597 "adrfam": "ipv4", 00:20:07.597 "trsvcid": "$NVMF_PORT", 00:20:07.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:07.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:07.597 "hdgst": ${hdgst:-false}, 00:20:07.597 "ddgst": ${ddgst:-false} 00:20:07.597 }, 00:20:07.597 "method": "bdev_nvme_attach_controller" 00:20:07.597 } 00:20:07.597 EOF 00:20:07.597 )") 00:20:07.597 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:07.597 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:07.597 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:07.597 { 00:20:07.597 "params": { 00:20:07.597 "name": "Nvme$subsystem", 00:20:07.597 "trtype": "$TEST_TRANSPORT", 00:20:07.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:07.597 "adrfam": "ipv4", 00:20:07.597 "trsvcid": "$NVMF_PORT", 00:20:07.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:07.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:07.597 "hdgst": ${hdgst:-false}, 00:20:07.597 "ddgst": ${ddgst:-false} 00:20:07.597 }, 00:20:07.597 "method": "bdev_nvme_attach_controller" 00:20:07.597 } 00:20:07.597 EOF 00:20:07.597 )") 00:20:07.597 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:07.597 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:07.597 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:07.597 { 00:20:07.597 
"params": { 00:20:07.597 "name": "Nvme$subsystem", 00:20:07.597 "trtype": "$TEST_TRANSPORT", 00:20:07.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:07.597 "adrfam": "ipv4", 00:20:07.597 "trsvcid": "$NVMF_PORT", 00:20:07.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:07.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:07.597 "hdgst": ${hdgst:-false}, 00:20:07.597 "ddgst": ${ddgst:-false} 00:20:07.597 }, 00:20:07.597 "method": "bdev_nvme_attach_controller" 00:20:07.597 } 00:20:07.597 EOF 00:20:07.597 )") 00:20:07.597 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:07.597 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:07.597 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:07.597 { 00:20:07.597 "params": { 00:20:07.597 "name": "Nvme$subsystem", 00:20:07.597 "trtype": "$TEST_TRANSPORT", 00:20:07.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:07.597 "adrfam": "ipv4", 00:20:07.597 "trsvcid": "$NVMF_PORT", 00:20:07.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:07.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:07.597 "hdgst": ${hdgst:-false}, 00:20:07.597 "ddgst": ${ddgst:-false} 00:20:07.597 }, 00:20:07.597 "method": "bdev_nvme_attach_controller" 00:20:07.597 } 00:20:07.597 EOF 00:20:07.597 )") 00:20:07.597 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:07.597 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:07.597 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:07.597 { 00:20:07.597 "params": { 00:20:07.597 "name": "Nvme$subsystem", 00:20:07.597 "trtype": "$TEST_TRANSPORT", 00:20:07.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:07.597 "adrfam": "ipv4", 00:20:07.597 "trsvcid": "$NVMF_PORT", 00:20:07.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:07.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:07.597 "hdgst": ${hdgst:-false}, 00:20:07.597 "ddgst": ${ddgst:-false} 00:20:07.597 }, 00:20:07.597 "method": "bdev_nvme_attach_controller" 00:20:07.597 } 00:20:07.597 EOF 00:20:07.597 )") 00:20:07.597 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:07.597 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:20:07.598 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:07.598 20:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:07.598 "params": { 00:20:07.598 "name": "Nvme1", 00:20:07.598 "trtype": "tcp", 00:20:07.598 "traddr": "10.0.0.2", 00:20:07.598 "adrfam": "ipv4", 00:20:07.598 "trsvcid": "4420", 00:20:07.598 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:07.598 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:07.598 "hdgst": false, 00:20:07.598 "ddgst": false 00:20:07.598 }, 00:20:07.598 "method": "bdev_nvme_attach_controller" 00:20:07.598 },{ 00:20:07.598 "params": { 00:20:07.598 "name": "Nvme2", 00:20:07.598 "trtype": "tcp", 00:20:07.598 "traddr": "10.0.0.2", 00:20:07.598 "adrfam": "ipv4", 00:20:07.598 "trsvcid": "4420", 00:20:07.598 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:07.598 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:07.598 "hdgst": false, 00:20:07.598 "ddgst": false 00:20:07.598 }, 00:20:07.598 "method": "bdev_nvme_attach_controller" 00:20:07.598 },{ 00:20:07.598 "params": { 00:20:07.598 "name": "Nvme3", 00:20:07.598 "trtype": "tcp", 00:20:07.598 "traddr": "10.0.0.2", 00:20:07.598 "adrfam": "ipv4", 00:20:07.598 "trsvcid": "4420", 00:20:07.598 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:07.598 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:07.598 "hdgst": false, 00:20:07.598 "ddgst": false 00:20:07.598 }, 00:20:07.598 "method": "bdev_nvme_attach_controller" 00:20:07.598 },{ 00:20:07.598 "params": { 00:20:07.598 "name": "Nvme4", 00:20:07.598 "trtype": "tcp", 00:20:07.598 "traddr": "10.0.0.2", 00:20:07.598 "adrfam": "ipv4", 00:20:07.598 "trsvcid": "4420", 00:20:07.598 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:07.598 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:07.598 "hdgst": false, 00:20:07.598 "ddgst": false 00:20:07.598 }, 00:20:07.598 "method": "bdev_nvme_attach_controller" 00:20:07.598 },{ 00:20:07.598 "params": { 00:20:07.598 "name": "Nvme5", 00:20:07.598 "trtype": "tcp", 00:20:07.598 "traddr": "10.0.0.2", 00:20:07.598 "adrfam": "ipv4", 00:20:07.598 "trsvcid": "4420", 00:20:07.598 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:07.598 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:07.598 "hdgst": false, 00:20:07.598 "ddgst": false 00:20:07.598 }, 00:20:07.598 "method": "bdev_nvme_attach_controller" 00:20:07.598 },{ 00:20:07.598 "params": { 00:20:07.598 "name": "Nvme6", 00:20:07.598 "trtype": "tcp", 00:20:07.598 "traddr": "10.0.0.2", 00:20:07.598 "adrfam": "ipv4", 00:20:07.598 "trsvcid": "4420", 00:20:07.598 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:07.598 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:07.598 "hdgst": false, 00:20:07.598 "ddgst": false 00:20:07.598 }, 00:20:07.598 "method": "bdev_nvme_attach_controller" 00:20:07.598 },{ 00:20:07.598 "params": { 00:20:07.598 "name": "Nvme7", 00:20:07.598 "trtype": "tcp", 00:20:07.598 "traddr": "10.0.0.2", 00:20:07.598 "adrfam": "ipv4", 00:20:07.598 "trsvcid": "4420", 00:20:07.598 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:07.598 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:07.598 "hdgst": false, 00:20:07.598 "ddgst": false 00:20:07.598 }, 00:20:07.598 "method": "bdev_nvme_attach_controller" 00:20:07.598 },{ 00:20:07.598 "params": { 00:20:07.598 "name": "Nvme8", 00:20:07.598 "trtype": "tcp", 00:20:07.598 "traddr": "10.0.0.2", 00:20:07.598 "adrfam": "ipv4", 00:20:07.598 "trsvcid": "4420", 00:20:07.598 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:07.598 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:07.598 "hdgst": false, 00:20:07.598 "ddgst": false 00:20:07.598 }, 00:20:07.598 "method": "bdev_nvme_attach_controller" 00:20:07.598 },{ 00:20:07.598 "params": { 00:20:07.598 "name": "Nvme9", 00:20:07.598 "trtype": "tcp", 00:20:07.598 "traddr": "10.0.0.2", 00:20:07.598 "adrfam": "ipv4", 00:20:07.598 "trsvcid": "4420", 00:20:07.598 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:07.598 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:07.598 "hdgst": false, 00:20:07.598 "ddgst": false 00:20:07.598 }, 00:20:07.598 "method": "bdev_nvme_attach_controller" 00:20:07.598 },{ 00:20:07.598 "params": { 00:20:07.598 "name": "Nvme10", 00:20:07.598 "trtype": "tcp", 00:20:07.598 "traddr": "10.0.0.2", 00:20:07.598 "adrfam": "ipv4", 00:20:07.598 "trsvcid": "4420", 00:20:07.598 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:07.598 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:07.598 "hdgst": false, 00:20:07.598 "ddgst": false 00:20:07.598 }, 00:20:07.598 "method": "bdev_nvme_attach_controller" 00:20:07.598 }' 00:20:07.598 [2024-11-26 20:50:11.264765] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:20:07.598 [2024-11-26 20:50:11.264859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1704991 ] 00:20:07.856 [2024-11-26 20:50:11.337743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.856 [2024-11-26 20:50:11.400488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.228 Running I/O for 1 seconds... 00:20:10.465 1809.00 IOPS, 113.06 MiB/s 00:20:10.465 Latency(us) 00:20:10.465 [2024-11-26T19:50:14.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.465 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:10.465 Verification LBA range: start 0x0 length 0x400 00:20:10.465 Nvme1n1 : 1.12 227.79 14.24 0.00 0.00 272460.42 20291.89 251658.24 00:20:10.465 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:10.465 Verification LBA range: start 0x0 length 0x400 00:20:10.465 Nvme2n1 : 1.09 243.11 15.19 0.00 0.00 252367.55 11990.66 245444.46 00:20:10.465 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:10.465 Verification LBA range: start 0x0 length 0x400 00:20:10.465 Nvme3n1 : 1.08 240.43 15.03 0.00 0.00 252739.86 9077.95 239230.67 00:20:10.465 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:10.465 Verification LBA range: start 0x0 length 0x400 00:20:10.465 Nvme4n1 : 1.09 235.22 14.70 0.00 0.00 255475.11 19612.25 250104.79 00:20:10.465 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:10.465 Verification LBA range: start 0x0 length 0x400 00:20:10.465 Nvme5n1 : 1.15 223.55 13.97 0.00 0.00 265082.12 23010.42 256318.58 00:20:10.465 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:10.465 Verification LBA range: start 0x0 length 0x400 00:20:10.465 Nvme6n1 : 1.13 229.76 14.36 0.00 0.00 252136.22 4514.70 257872.02 00:20:10.465 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:10.465 Verification LBA range: start 0x0 length 0x400 00:20:10.465 Nvme7n1 : 1.18 271.55 16.97 0.00 0.00 211380.00 13883.92 251658.24 00:20:10.465 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:10.465 Verification 
LBA range: start 0x0 length 0x400 00:20:10.465 Nvme8n1 : 1.14 225.04 14.06 0.00 0.00 249484.52 20097.71 254765.13 00:20:10.465 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:10.465 Verification LBA range: start 0x0 length 0x400 00:20:10.465 Nvme9n1 : 1.17 218.45 13.65 0.00 0.00 253516.80 23107.51 284280.60 00:20:10.465 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:10.465 Verification LBA range: start 0x0 length 0x400 00:20:10.465 Nvme10n1 : 1.18 270.04 16.88 0.00 0.00 201978.42 5364.24 268746.15 00:20:10.465 [2024-11-26T19:50:14.162Z] =================================================================================================================== 00:20:10.465 [2024-11-26T19:50:14.162Z] Total : 2384.95 149.06 0.00 0.00 244806.19 4514.70 284280.60 00:20:10.723 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:20:10.723 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:10.723 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:10.723 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:10.723 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:10.723 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:10.723 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:20:10.723 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:10.723 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:20:10.723 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:10.723 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:10.723 rmmod nvme_tcp 00:20:10.723 rmmod nvme_fabrics 00:20:10.723 rmmod nvme_keyring 00:20:10.723 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:10.723 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:20:10.723 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:20:10.723 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1704422 ']' 00:20:10.723 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1704422 00:20:10.723 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1704422 ']' 00:20:10.723 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 1704422 00:20:10.723 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:20:10.723 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:10.723 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1704422 00:20:10.723 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:10.723 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:10.723 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1704422' 00:20:10.723 killing process with pid 1704422 00:20:10.723 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1704422 00:20:10.723 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1704422 00:20:11.288 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:11.288 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:11.288 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:11.288 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:20:11.288 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:20:11.288 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:11.288 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:20:11.288 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:11.288 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:11.288 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.288 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:11.288 20:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.186 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:13.186 00:20:13.186 real 0m11.984s 00:20:13.186 user 0m34.443s 00:20:13.186 sys 0m3.390s 00:20:13.186 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:13.186 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:13.186 ************************************ 00:20:13.186 END TEST nvmf_shutdown_tc1 00:20:13.186 ************************************ 00:20:13.186 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:13.186 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:13.186 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:20:13.186 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:13.445 ************************************ 00:20:13.445 START TEST nvmf_shutdown_tc2 00:20:13.445 ************************************ 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:20:13.445 20:50:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:13.445 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:13.446 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:13.446 20:50:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:13.446 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:13.446 Found net devices under 0000:09:00.0: cvl_0_0 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:13.446 Found net devices under 0000:09:00.1: cvl_0_1 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:13.446 20:50:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:13.446 20:50:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:13.446 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:13.446 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:13.446 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:13.446 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:13.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:13.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:20:13.446 00:20:13.446 --- 10.0.0.2 ping statistics --- 00:20:13.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.446 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:20:13.446 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:13.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:13.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:20:13.446 00:20:13.446 --- 10.0.0.1 ping statistics --- 00:20:13.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.446 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:20:13.446 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:13.446 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:20:13.446 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:13.446 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:13.446 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:13.446 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:13.446 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:13.446 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:13.446 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:13.446 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:13.446 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:13.446 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:13.447 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:13.447 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1705757 00:20:13.447 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:13.447 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1705757 00:20:13.447 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1705757 ']' 00:20:13.447 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.447 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:13.447 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:13.447 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:13.447 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:13.447 [2024-11-26 20:50:17.130031] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:20:13.447 [2024-11-26 20:50:17.130143] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.704 [2024-11-26 20:50:17.202419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:13.704 [2024-11-26 20:50:17.256939] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.704 [2024-11-26 20:50:17.256994] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.704 [2024-11-26 20:50:17.257017] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:13.704 [2024-11-26 20:50:17.257027] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:13.704 [2024-11-26 20:50:17.257036] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:13.704 [2024-11-26 20:50:17.258471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:13.704 [2024-11-26 20:50:17.258532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:13.704 [2024-11-26 20:50:17.258599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:13.704 [2024-11-26 20:50:17.258602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.704 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:13.704 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:13.704 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:13.704 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:13.704 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:13.704 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:13.705 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:13.705 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.962 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:13.962 [2024-11-26 20:50:17.404833] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:13.962 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.962 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:13.963 20:50:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:13.963 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:13.963 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:13.963 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:13.963 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:13.963 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:13.963 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:13.963 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:13.963 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:13.963 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:13.963 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:13.963 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:13.963 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:13.963 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:13.963 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:13.963 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:13.963 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:13.963 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:13.963 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:13.963 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:13.963 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:13.963 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:13.963 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:13.963 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:13.963 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:13.963 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.963 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:13.963 Malloc1 
00:20:13.963 [2024-11-26 20:50:17.495747] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:13.963 Malloc2 00:20:13.963 Malloc3 00:20:13.963 Malloc4 00:20:14.221 Malloc5 00:20:14.221 Malloc6 00:20:14.221 Malloc7 00:20:14.221 Malloc8 00:20:14.221 Malloc9 00:20:14.480 Malloc10 00:20:14.480 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.480 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:14.480 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:14.480 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:14.480 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1705940 00:20:14.480 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1705940 /var/tmp/bdevperf.sock 00:20:14.480 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1705940 ']' 00:20:14.480 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:14.480 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:14.480 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:14.480 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:14.480 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:20:14.480 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:14.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
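With the ten Malloc-backed subsystems in place, bdevperf is started as the initiator: it reads its bdev configuration from /dev/fd/63 (a process substitution fed by gen_nvmf_target_json, whose output is assembled in the trace that follows), exposes its own RPC socket at /var/tmp/bdevperf.sock, and runs a 64-deep, 64 KiB verify workload for 10 seconds. A condensed sketch of that launch, assuming nvmf/common.sh has been sourced so gen_nvmf_target_json is available and paths are relative to an SPDK checkout:

# Launch bdevperf against the generated NVMe-oF attach config.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!
# Wait for bdevperf's RPC server, then sample read completions on Nvme1n1
# the same way the waitforio loop further down does.
./scripts/rpc.py -s /var/tmp/bdevperf.sock framework_wait_init
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops'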
00:20:14.480 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:20:14.480 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:14.480 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:14.480 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:14.480 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:14.480 { 00:20:14.480 "params": { 00:20:14.480 "name": "Nvme$subsystem", 00:20:14.480 "trtype": "$TEST_TRANSPORT", 00:20:14.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:14.480 "adrfam": "ipv4", 00:20:14.480 "trsvcid": "$NVMF_PORT", 00:20:14.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:14.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:14.480 "hdgst": ${hdgst:-false}, 00:20:14.480 "ddgst": ${ddgst:-false} 00:20:14.480 }, 00:20:14.480 "method": "bdev_nvme_attach_controller" 00:20:14.480 } 00:20:14.480 EOF 00:20:14.480 )") 00:20:14.480 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:14.480 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:14.480 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:14.480 { 00:20:14.480 "params": { 00:20:14.480 "name": "Nvme$subsystem", 00:20:14.480 "trtype": "$TEST_TRANSPORT", 00:20:14.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:14.480 "adrfam": "ipv4", 00:20:14.480 "trsvcid": "$NVMF_PORT", 00:20:14.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:14.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:14.480 "hdgst": ${hdgst:-false}, 00:20:14.480 "ddgst": ${ddgst:-false} 00:20:14.480 }, 00:20:14.480 "method": "bdev_nvme_attach_controller" 00:20:14.480 } 00:20:14.480 EOF 00:20:14.480 )") 00:20:14.480 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:14.480 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:14.480 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:14.480 { 00:20:14.480 "params": { 00:20:14.480 "name": "Nvme$subsystem", 00:20:14.480 "trtype": "$TEST_TRANSPORT", 00:20:14.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:14.480 "adrfam": "ipv4", 00:20:14.480 "trsvcid": "$NVMF_PORT", 00:20:14.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:14.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:14.480 "hdgst": ${hdgst:-false}, 00:20:14.480 "ddgst": ${ddgst:-false} 00:20:14.480 }, 00:20:14.480 "method": "bdev_nvme_attach_controller" 00:20:14.480 } 00:20:14.480 EOF 00:20:14.480 )") 00:20:14.480 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:14.480 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:14.480 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:14.480 { 00:20:14.480 "params": { 00:20:14.480 "name": "Nvme$subsystem", 00:20:14.480 
"trtype": "$TEST_TRANSPORT", 00:20:14.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:14.480 "adrfam": "ipv4", 00:20:14.480 "trsvcid": "$NVMF_PORT", 00:20:14.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:14.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:14.480 "hdgst": ${hdgst:-false}, 00:20:14.480 "ddgst": ${ddgst:-false} 00:20:14.480 }, 00:20:14.480 "method": "bdev_nvme_attach_controller" 00:20:14.480 } 00:20:14.480 EOF 00:20:14.480 )") 00:20:14.480 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:14.480 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:14.480 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:14.480 { 00:20:14.480 "params": { 00:20:14.480 "name": "Nvme$subsystem", 00:20:14.480 "trtype": "$TEST_TRANSPORT", 00:20:14.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:14.480 "adrfam": "ipv4", 00:20:14.480 "trsvcid": "$NVMF_PORT", 00:20:14.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:14.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:14.480 "hdgst": ${hdgst:-false}, 00:20:14.480 "ddgst": ${ddgst:-false} 00:20:14.480 }, 00:20:14.480 "method": "bdev_nvme_attach_controller" 00:20:14.480 } 00:20:14.480 EOF 00:20:14.480 )") 00:20:14.480 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:14.480 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:14.480 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:14.480 { 00:20:14.480 "params": { 00:20:14.480 "name": "Nvme$subsystem", 00:20:14.480 "trtype": "$TEST_TRANSPORT", 00:20:14.480 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:14.480 "adrfam": "ipv4", 00:20:14.480 "trsvcid": "$NVMF_PORT", 00:20:14.480 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:14.480 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:14.480 "hdgst": ${hdgst:-false}, 00:20:14.480 "ddgst": ${ddgst:-false} 00:20:14.480 }, 00:20:14.480 "method": "bdev_nvme_attach_controller" 00:20:14.480 } 00:20:14.480 EOF 00:20:14.480 )") 00:20:14.481 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:14.481 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:14.481 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:14.481 { 00:20:14.481 "params": { 00:20:14.481 "name": "Nvme$subsystem", 00:20:14.481 "trtype": "$TEST_TRANSPORT", 00:20:14.481 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:14.481 "adrfam": "ipv4", 00:20:14.481 "trsvcid": "$NVMF_PORT", 00:20:14.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:14.481 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:14.481 "hdgst": ${hdgst:-false}, 00:20:14.481 "ddgst": ${ddgst:-false} 00:20:14.481 }, 00:20:14.481 "method": "bdev_nvme_attach_controller" 00:20:14.481 } 00:20:14.481 EOF 00:20:14.481 )") 00:20:14.481 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:14.481 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:14.481 20:50:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:14.481 { 00:20:14.481 "params": { 00:20:14.481 "name": "Nvme$subsystem", 00:20:14.481 "trtype": "$TEST_TRANSPORT", 00:20:14.481 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:14.481 "adrfam": "ipv4", 00:20:14.481 "trsvcid": "$NVMF_PORT", 00:20:14.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:14.481 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:14.481 "hdgst": ${hdgst:-false}, 00:20:14.481 "ddgst": ${ddgst:-false} 00:20:14.481 }, 00:20:14.481 "method": "bdev_nvme_attach_controller" 00:20:14.481 } 00:20:14.481 EOF 00:20:14.481 )") 00:20:14.481 20:50:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:14.481 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:14.481 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:14.481 { 00:20:14.481 "params": { 00:20:14.481 "name": "Nvme$subsystem", 00:20:14.481 "trtype": "$TEST_TRANSPORT", 00:20:14.481 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:14.481 "adrfam": "ipv4", 00:20:14.481 "trsvcid": "$NVMF_PORT", 00:20:14.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:14.481 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:14.481 "hdgst": ${hdgst:-false}, 00:20:14.481 "ddgst": ${ddgst:-false} 00:20:14.481 }, 00:20:14.481 "method": "bdev_nvme_attach_controller" 00:20:14.481 } 00:20:14.481 EOF 00:20:14.481 )") 00:20:14.481 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:14.481 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:14.481 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:14.481 { 00:20:14.481 "params": { 00:20:14.481 "name": "Nvme$subsystem", 00:20:14.481 "trtype": "$TEST_TRANSPORT", 00:20:14.481 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:14.481 "adrfam": "ipv4", 00:20:14.481 "trsvcid": "$NVMF_PORT", 00:20:14.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:14.481 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:14.481 "hdgst": ${hdgst:-false}, 00:20:14.481 "ddgst": ${ddgst:-false} 00:20:14.481 }, 00:20:14.481 "method": "bdev_nvme_attach_controller" 00:20:14.481 } 00:20:14.481 EOF 00:20:14.481 )") 00:20:14.481 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:14.481 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
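Each pass of the loop above appends one bdev_nvme_attach_controller JSON fragment to the config array via a here-document; the fragments are then comma-joined (IFS=,) and fed through jq, as the trace continues below, to form the --json input for bdevperf. A standalone sketch of that assembly pattern, assuming the generic SPDK JSON-config envelope ("subsystems" -> "bdev" -> "config") as the final wrapper; field values mirror the fragments above:

# Build one attach-controller fragment per subsystem, then join and validate.
config=()
for i in 1 2; do
    config+=("{\"method\": \"bdev_nvme_attach_controller\", \"params\": {\"name\": \"Nvme$i\", \"trtype\": \"tcp\", \"traddr\": \"10.0.0.2\", \"adrfam\": \"ipv4\", \"trsvcid\": \"4420\", \"subnqn\": \"nqn.2016-06.io.spdk:cnode$i\", \"hostnqn\": \"nqn.2016-06.io.spdk:host$i\"}}")
done
# Join the fragments with commas (IFS is changed only inside the subshell).
joined=$(IFS=,; printf '%s' "${config[*]}")
printf '{"subsystems": [{"subsystem": "bdev", "config": [%s]}]}\n' "$joined" | jq .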
00:20:14.481 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:20:14.481 20:50:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:14.481 "params": { 00:20:14.481 "name": "Nvme1", 00:20:14.481 "trtype": "tcp", 00:20:14.481 "traddr": "10.0.0.2", 00:20:14.481 "adrfam": "ipv4", 00:20:14.481 "trsvcid": "4420", 00:20:14.481 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.481 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:14.481 "hdgst": false, 00:20:14.481 "ddgst": false 00:20:14.481 }, 00:20:14.481 "method": "bdev_nvme_attach_controller" 00:20:14.481 },{ 00:20:14.481 "params": { 00:20:14.481 "name": "Nvme2", 00:20:14.481 "trtype": "tcp", 00:20:14.481 "traddr": "10.0.0.2", 00:20:14.481 "adrfam": "ipv4", 00:20:14.481 "trsvcid": "4420", 00:20:14.481 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:14.481 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:14.481 "hdgst": false, 00:20:14.481 "ddgst": false 00:20:14.481 }, 00:20:14.481 "method": "bdev_nvme_attach_controller" 00:20:14.481 },{ 00:20:14.481 "params": { 00:20:14.481 "name": "Nvme3", 00:20:14.481 "trtype": "tcp", 00:20:14.481 "traddr": "10.0.0.2", 00:20:14.481 "adrfam": "ipv4", 00:20:14.481 "trsvcid": "4420", 00:20:14.481 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:14.481 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:14.481 "hdgst": false, 00:20:14.481 "ddgst": false 00:20:14.481 }, 00:20:14.481 "method": "bdev_nvme_attach_controller" 00:20:14.481 },{ 00:20:14.481 "params": { 00:20:14.481 "name": "Nvme4", 00:20:14.481 "trtype": "tcp", 00:20:14.481 "traddr": "10.0.0.2", 00:20:14.481 "adrfam": "ipv4", 00:20:14.481 "trsvcid": "4420", 00:20:14.481 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:14.481 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:14.481 "hdgst": false, 00:20:14.481 "ddgst": false 00:20:14.481 }, 00:20:14.481 "method": "bdev_nvme_attach_controller" 00:20:14.481 },{ 00:20:14.481 "params": { 00:20:14.481 "name": "Nvme5", 00:20:14.481 "trtype": "tcp", 00:20:14.481 "traddr": "10.0.0.2", 00:20:14.481 "adrfam": "ipv4", 00:20:14.481 "trsvcid": "4420", 00:20:14.481 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:14.481 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:14.481 "hdgst": false, 00:20:14.481 "ddgst": false 00:20:14.481 }, 00:20:14.481 "method": "bdev_nvme_attach_controller" 00:20:14.481 },{ 00:20:14.481 "params": { 00:20:14.481 "name": "Nvme6", 00:20:14.481 "trtype": "tcp", 00:20:14.481 "traddr": "10.0.0.2", 00:20:14.481 "adrfam": "ipv4", 00:20:14.481 "trsvcid": "4420", 00:20:14.481 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:14.481 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:14.481 "hdgst": false, 00:20:14.481 "ddgst": false 00:20:14.481 }, 00:20:14.481 "method": "bdev_nvme_attach_controller" 00:20:14.481 },{ 00:20:14.481 "params": { 00:20:14.481 "name": "Nvme7", 00:20:14.481 "trtype": "tcp", 00:20:14.481 "traddr": "10.0.0.2", 00:20:14.481 "adrfam": "ipv4", 00:20:14.481 "trsvcid": "4420", 00:20:14.481 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:14.481 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:14.481 "hdgst": false, 00:20:14.481 "ddgst": false 00:20:14.481 }, 00:20:14.481 "method": "bdev_nvme_attach_controller" 00:20:14.481 },{ 00:20:14.481 "params": { 00:20:14.481 "name": "Nvme8", 00:20:14.481 "trtype": "tcp", 00:20:14.481 "traddr": "10.0.0.2", 00:20:14.481 "adrfam": "ipv4", 00:20:14.481 "trsvcid": "4420", 00:20:14.481 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:14.481 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:14.481 "hdgst": false, 00:20:14.481 "ddgst": false 00:20:14.481 }, 00:20:14.481 "method": "bdev_nvme_attach_controller" 00:20:14.481 },{ 00:20:14.481 "params": { 00:20:14.481 "name": "Nvme9", 00:20:14.481 "trtype": "tcp", 00:20:14.481 "traddr": "10.0.0.2", 00:20:14.481 "adrfam": "ipv4", 00:20:14.481 "trsvcid": "4420", 00:20:14.481 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:14.481 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:14.481 "hdgst": false, 00:20:14.481 "ddgst": false 00:20:14.481 }, 00:20:14.481 "method": "bdev_nvme_attach_controller" 00:20:14.481 },{ 00:20:14.481 "params": { 00:20:14.481 "name": "Nvme10", 00:20:14.481 "trtype": "tcp", 00:20:14.481 "traddr": "10.0.0.2", 00:20:14.481 "adrfam": "ipv4", 00:20:14.481 "trsvcid": "4420", 00:20:14.481 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:14.481 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:14.481 "hdgst": false, 00:20:14.481 "ddgst": false 00:20:14.481 }, 00:20:14.481 "method": "bdev_nvme_attach_controller" 00:20:14.481 }' 00:20:14.481 [2024-11-26 20:50:18.022796] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:20:14.481 [2024-11-26 20:50:18.022888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1705940 ] 00:20:14.481 [2024-11-26 20:50:18.095300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.481 [2024-11-26 20:50:18.155321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.380 Running I/O for 10 seconds... 00:20:16.639 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:16.639 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:16.639 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:16.639 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.639 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:16.639 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.639 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:16.639 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:16.639 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:16.639 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:20:16.639 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:20:16.639 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:16.639 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:16.639 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:16.639 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:16.639 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.639 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:16.639 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.639 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:20:16.639 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:20:16.639 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:16.897 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:16.897 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:16.897 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:16.897 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:16.897 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.897 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:16.897 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.897 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:20:16.897 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:20:16.897 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:17.155 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:17.155 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:17.155 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:17.155 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:17.155 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.155 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:17.155 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.155 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:20:17.155 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:20:17.155 20:50:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:20:17.156 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:20:17.156 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:20:17.156 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1705940 00:20:17.156 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1705940 ']' 00:20:17.156 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1705940 00:20:17.156 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:17.156 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:17.156 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1705940 00:20:17.156 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:17.156 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:17.156 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1705940' 00:20:17.156 killing process with pid 1705940 00:20:17.156 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1705940 00:20:17.156 20:50:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1705940 00:20:17.414 Received shutdown signal, test time was about 0.976332 seconds 00:20:17.414 00:20:17.414 Latency(us) 00:20:17.414 [2024-11-26T19:50:21.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.414 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:17.414 Verification LBA range: start 0x0 length 0x400 00:20:17.414 Nvme1n1 : 0.97 263.32 16.46 0.00 0.00 240281.22 19515.16 246997.90 00:20:17.414 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:17.414 Verification LBA range: start 0x0 length 0x400 00:20:17.414 Nvme2n1 : 0.93 217.66 13.60 0.00 0.00 281440.17 5121.52 233016.89 00:20:17.414 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:17.414 Verification LBA range: start 0x0 length 0x400 00:20:17.414 Nvme3n1 : 0.97 272.74 17.05 0.00 0.00 221369.49 3737.98 242337.56 00:20:17.414 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:17.414 Verification LBA range: start 0x0 length 0x400 00:20:17.414 Nvme4n1 : 0.97 265.01 16.56 0.00 0.00 224779.19 17961.72 259425.47 00:20:17.414 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:17.414 Verification LBA range: start 0x0 length 0x400 00:20:17.414 Nvme5n1 : 0.94 203.34 12.71 0.00 0.00 286525.82 21068.61 267192.70 00:20:17.414 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:17.414 Verification LBA range: start 0x0 length 0x400 00:20:17.414 Nvme6n1 : 0.95 206.15 12.88 0.00 0.00 275656.03 3665.16 257872.02 00:20:17.414 Job: Nvme7n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:20:17.414 Verification LBA range: start 0x0 length 0x400 00:20:17.414 Nvme7n1 : 0.93 206.15 12.88 0.00 0.00 269826.78 21262.79 242337.56 00:20:17.414 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:17.414 Verification LBA range: start 0x0 length 0x400 00:20:17.414 Nvme8n1 : 0.98 262.43 16.40 0.00 0.00 208985.51 17087.91 278066.82 00:20:17.414 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:17.414 Verification LBA range: start 0x0 length 0x400 00:20:17.414 Nvme9n1 : 0.96 200.93 12.56 0.00 0.00 266235.70 21748.24 267192.70 00:20:17.414 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:17.414 Verification LBA range: start 0x0 length 0x400 00:20:17.414 Nvme10n1 : 0.96 199.58 12.47 0.00 0.00 262461.82 23592.96 285834.05 00:20:17.414 [2024-11-26T19:50:21.111Z] =================================================================================================================== 00:20:17.414 [2024-11-26T19:50:21.111Z] Total : 2297.32 143.58 0.00 0.00 250321.39 3665.16 285834.05 00:20:17.414 20:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:20:18.787 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1705757 00:20:18.787 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:20:18.787 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:18.787 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:18.787 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:18.787 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:18.787 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:18.787 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:20:18.787 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:18.787 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:20:18.787 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:18.787 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:18.787 rmmod nvme_tcp 00:20:18.787 rmmod nvme_fabrics 00:20:18.787 rmmod nvme_keyring 00:20:18.787 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:18.787 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:20:18.787 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:20:18.787 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1705757 ']' 00:20:18.787 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@518 -- # killprocess 1705757 00:20:18.787 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1705757 ']' 00:20:18.787 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1705757 00:20:18.787 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:18.787 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:18.787 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1705757 00:20:18.787 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:18.787 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:18.787 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1705757' 00:20:18.787 killing process with pid 1705757 00:20:18.787 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1705757 00:20:18.787 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1705757 00:20:19.046 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:19.046 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:19.046 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:19.046 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:20:19.046 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:20:19.046 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:19.046 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:20:19.046 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:19.046 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:19.046 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.046 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:19.046 20:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:21.583 00:20:21.583 real 0m7.850s 00:20:21.583 user 0m24.368s 00:20:21.583 sys 0m1.506s 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- 
# set +x 00:20:21.583 ************************************ 00:20:21.583 END TEST nvmf_shutdown_tc2 00:20:21.583 ************************************ 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:21.583 ************************************ 00:20:21.583 START TEST nvmf_shutdown_tc3 00:20:21.583 ************************************ 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:21.583 20:50:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:21.583 20:50:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:21.583 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:21.583 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:21.583 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices 
under 0000:09:00.0: cvl_0_0' 00:20:21.584 Found net devices under 0000:09:00.0: cvl_0_0 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:21.584 Found net devices under 0000:09:00.1: cvl_0_1 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:21.584 20:50:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:21.584 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:21.584 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:20:21.584 00:20:21.584 --- 10.0.0.2 ping statistics --- 00:20:21.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.584 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:21.584 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:21.584 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:20:21.584 00:20:21.584 --- 10.0.0.1 ping statistics --- 00:20:21.584 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.584 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1706851 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1706851 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1706851 ']' 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
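For readability, the topology that nvmf/common.sh has just finished building above (nvmf_tcp_init) can be condensed into the following sketch; the interface and namespace names are the ones this run detected, and the commands are the same ones visible in the trace:

  # cvl_0_1 (initiator side, 10.0.0.1) stays in the root namespace;
  # cvl_0_0 (target side, 10.0.0.2) is moved into its own namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP traffic to port 4420, then verify reachability both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target application command is then prefixed with the "ip netns exec cvl_0_0_ns_spdk" wrapper (NVMF_TARGET_NS_CMD), so nvmf_tgt listens on 10.0.0.2 inside the namespace while the initiator connects from 10.0.0.1 in the root namespace.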
00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:21.584 20:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:21.585 [2024-11-26 20:50:25.023866] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:20:21.585 [2024-11-26 20:50:25.023939] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:21.585 [2024-11-26 20:50:25.096517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:21.585 [2024-11-26 20:50:25.157421] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:21.585 [2024-11-26 20:50:25.157475] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:21.585 [2024-11-26 20:50:25.157489] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:21.585 [2024-11-26 20:50:25.157501] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:21.585 [2024-11-26 20:50:25.157511] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:21.585 [2024-11-26 20:50:25.159127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:21.585 [2024-11-26 20:50:25.159191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:21.585 [2024-11-26 20:50:25.159256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:21.585 [2024-11-26 20:50:25.159259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:21.861 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:21.861 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:20:21.861 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:21.861 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:21.861 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:21.861 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:21.861 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:21.861 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.861 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:21.861 [2024-11-26 20:50:25.321861] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:21.861 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.861 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:21.861 20:50:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:21.861 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:21.861 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:21.861 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:21.861 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:21.861 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:21.861 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:21.861 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:21.861 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:21.862 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:21.862 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:21.862 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:21.862 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:21.862 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:21.862 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:21.862 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:21.862 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:21.862 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:21.862 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:21.862 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:21.862 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:21.862 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:21.862 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:21.862 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:21.862 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:21.862 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.862 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:21.862 Malloc1 
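The loop traced above (target/shutdown.sh@28-29) appends one block of RPC commands per subsystem number to rpcs.txt and then replays the whole file through a single rpc_cmd call (shutdown.sh@36); the Malloc1..Malloc10 names printed around this point are the bdevs created by that batch. The exact text the script's `cat` emits is not shown in this log, but assuming the usual SPDK RPC names, one iteration for subsystem $i would look roughly like this hypothetical sketch:

  # hypothetical per-subsystem block (sizes and flags illustrative only)
  bdev_malloc_create -b Malloc$i 64 512
  nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
  nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
  nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420

The 'NVMe/TCP Target Listening on 10.0.0.2 port 4420' notice that follows confirms the listener from this batch coming up inside the target namespace.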
00:20:21.862 [2024-11-26 20:50:25.421005] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:21.862 Malloc2 00:20:21.862 Malloc3 00:20:21.862 Malloc4 00:20:22.120 Malloc5 00:20:22.120 Malloc6 00:20:22.120 Malloc7 00:20:22.120 Malloc8 00:20:22.120 Malloc9 00:20:22.379 Malloc10 00:20:22.379 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.379 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:22.379 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:22.379 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:22.379 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1707029 00:20:22.379 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1707029 /var/tmp/bdevperf.sock 00:20:22.379 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1707029 ']' 00:20:22.379 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:22.379 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:22.379 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:22.379 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:22.379 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:20:22.380 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:22.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
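The bdevperf run below is driven entirely by a generated JSON config: gen_nvmf_target_json emits one bdev_nvme_attach_controller parameter block per subsystem (the heredoc fragments traced next), joins them with IFS=',' and prints the result, which the test hands to bdevperf through process substitution. Stripped of the xtrace noise, the invocation amounts to something like this sketch (paths and flags as used in this job):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
      -q 64 -o 65536 -w verify -t 10

  # <(...) is what appears as /dev/fd/63 in the trace: bdevperf reads the
  # config from that descriptor, attaches Nvme1..Nvme10 over TCP to
  # 10.0.0.2:4420, then runs queue-depth-64, 64 KiB verify I/O for 10 seconds.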
00:20:22.380 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:20:22.380 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:22.380 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:22.380 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:22.380 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:22.380 { 00:20:22.380 "params": { 00:20:22.380 "name": "Nvme$subsystem", 00:20:22.380 "trtype": "$TEST_TRANSPORT", 00:20:22.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:22.380 "adrfam": "ipv4", 00:20:22.380 "trsvcid": "$NVMF_PORT", 00:20:22.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:22.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:22.380 "hdgst": ${hdgst:-false}, 00:20:22.380 "ddgst": ${ddgst:-false} 00:20:22.380 }, 00:20:22.380 "method": "bdev_nvme_attach_controller" 00:20:22.380 } 00:20:22.380 EOF 00:20:22.380 )") 00:20:22.380 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:22.380 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:22.380 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:22.380 { 00:20:22.380 "params": { 00:20:22.380 "name": "Nvme$subsystem", 00:20:22.380 "trtype": "$TEST_TRANSPORT", 00:20:22.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:22.380 "adrfam": "ipv4", 00:20:22.380 "trsvcid": "$NVMF_PORT", 00:20:22.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:22.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:22.380 "hdgst": ${hdgst:-false}, 00:20:22.380 "ddgst": ${ddgst:-false} 00:20:22.380 }, 00:20:22.380 "method": "bdev_nvme_attach_controller" 00:20:22.380 } 00:20:22.380 EOF 00:20:22.380 )") 00:20:22.380 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:22.380 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:22.380 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:22.380 { 00:20:22.380 "params": { 00:20:22.380 "name": "Nvme$subsystem", 00:20:22.380 "trtype": "$TEST_TRANSPORT", 00:20:22.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:22.380 "adrfam": "ipv4", 00:20:22.380 "trsvcid": "$NVMF_PORT", 00:20:22.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:22.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:22.380 "hdgst": ${hdgst:-false}, 00:20:22.380 "ddgst": ${ddgst:-false} 00:20:22.380 }, 00:20:22.380 "method": "bdev_nvme_attach_controller" 00:20:22.380 } 00:20:22.380 EOF 00:20:22.380 )") 00:20:22.380 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:22.380 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:22.380 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:22.380 { 00:20:22.380 "params": { 00:20:22.380 "name": "Nvme$subsystem", 00:20:22.380 
"trtype": "$TEST_TRANSPORT", 00:20:22.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:22.380 "adrfam": "ipv4", 00:20:22.380 "trsvcid": "$NVMF_PORT", 00:20:22.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:22.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:22.380 "hdgst": ${hdgst:-false}, 00:20:22.380 "ddgst": ${ddgst:-false} 00:20:22.380 }, 00:20:22.380 "method": "bdev_nvme_attach_controller" 00:20:22.380 } 00:20:22.380 EOF 00:20:22.380 )") 00:20:22.380 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:22.380 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:22.380 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:22.380 { 00:20:22.380 "params": { 00:20:22.380 "name": "Nvme$subsystem", 00:20:22.380 "trtype": "$TEST_TRANSPORT", 00:20:22.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:22.380 "adrfam": "ipv4", 00:20:22.380 "trsvcid": "$NVMF_PORT", 00:20:22.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:22.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:22.380 "hdgst": ${hdgst:-false}, 00:20:22.380 "ddgst": ${ddgst:-false} 00:20:22.380 }, 00:20:22.380 "method": "bdev_nvme_attach_controller" 00:20:22.380 } 00:20:22.380 EOF 00:20:22.380 )") 00:20:22.380 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:22.380 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:22.380 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:22.380 { 00:20:22.380 "params": { 00:20:22.380 "name": "Nvme$subsystem", 00:20:22.380 "trtype": "$TEST_TRANSPORT", 00:20:22.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:22.380 "adrfam": "ipv4", 00:20:22.380 "trsvcid": "$NVMF_PORT", 00:20:22.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:22.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:22.380 "hdgst": ${hdgst:-false}, 00:20:22.380 "ddgst": ${ddgst:-false} 00:20:22.380 }, 00:20:22.380 "method": "bdev_nvme_attach_controller" 00:20:22.380 } 00:20:22.380 EOF 00:20:22.380 )") 00:20:22.380 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:22.380 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:22.380 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:22.380 { 00:20:22.380 "params": { 00:20:22.380 "name": "Nvme$subsystem", 00:20:22.380 "trtype": "$TEST_TRANSPORT", 00:20:22.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:22.380 "adrfam": "ipv4", 00:20:22.380 "trsvcid": "$NVMF_PORT", 00:20:22.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:22.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:22.380 "hdgst": ${hdgst:-false}, 00:20:22.380 "ddgst": ${ddgst:-false} 00:20:22.380 }, 00:20:22.380 "method": "bdev_nvme_attach_controller" 00:20:22.380 } 00:20:22.380 EOF 00:20:22.380 )") 00:20:22.380 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:22.380 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:22.380 20:50:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:22.380 { 00:20:22.380 "params": { 00:20:22.380 "name": "Nvme$subsystem", 00:20:22.380 "trtype": "$TEST_TRANSPORT", 00:20:22.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:22.380 "adrfam": "ipv4", 00:20:22.380 "trsvcid": "$NVMF_PORT", 00:20:22.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:22.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:22.380 "hdgst": ${hdgst:-false}, 00:20:22.380 "ddgst": ${ddgst:-false} 00:20:22.380 }, 00:20:22.380 "method": "bdev_nvme_attach_controller" 00:20:22.380 } 00:20:22.380 EOF 00:20:22.380 )") 00:20:22.380 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:22.380 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:22.380 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:22.380 { 00:20:22.380 "params": { 00:20:22.380 "name": "Nvme$subsystem", 00:20:22.380 "trtype": "$TEST_TRANSPORT", 00:20:22.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:22.380 "adrfam": "ipv4", 00:20:22.380 "trsvcid": "$NVMF_PORT", 00:20:22.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:22.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:22.380 "hdgst": ${hdgst:-false}, 00:20:22.380 "ddgst": ${ddgst:-false} 00:20:22.380 }, 00:20:22.380 "method": "bdev_nvme_attach_controller" 00:20:22.380 } 00:20:22.380 EOF 00:20:22.380 )") 00:20:22.380 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:22.380 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:22.380 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:22.381 { 00:20:22.381 "params": { 00:20:22.381 "name": "Nvme$subsystem", 00:20:22.381 "trtype": "$TEST_TRANSPORT", 00:20:22.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:22.381 "adrfam": "ipv4", 00:20:22.381 "trsvcid": "$NVMF_PORT", 00:20:22.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:22.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:22.381 "hdgst": ${hdgst:-false}, 00:20:22.381 "ddgst": ${ddgst:-false} 00:20:22.381 }, 00:20:22.381 "method": "bdev_nvme_attach_controller" 00:20:22.381 } 00:20:22.381 EOF 00:20:22.381 )") 00:20:22.381 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:22.381 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
00:20:22.381 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:20:22.381 20:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:22.381 "params": { 00:20:22.381 "name": "Nvme1", 00:20:22.381 "trtype": "tcp", 00:20:22.381 "traddr": "10.0.0.2", 00:20:22.381 "adrfam": "ipv4", 00:20:22.381 "trsvcid": "4420", 00:20:22.381 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.381 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:22.381 "hdgst": false, 00:20:22.381 "ddgst": false 00:20:22.381 }, 00:20:22.381 "method": "bdev_nvme_attach_controller" 00:20:22.381 },{ 00:20:22.381 "params": { 00:20:22.381 "name": "Nvme2", 00:20:22.381 "trtype": "tcp", 00:20:22.381 "traddr": "10.0.0.2", 00:20:22.381 "adrfam": "ipv4", 00:20:22.381 "trsvcid": "4420", 00:20:22.381 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:22.381 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:22.381 "hdgst": false, 00:20:22.381 "ddgst": false 00:20:22.381 }, 00:20:22.381 "method": "bdev_nvme_attach_controller" 00:20:22.381 },{ 00:20:22.381 "params": { 00:20:22.381 "name": "Nvme3", 00:20:22.381 "trtype": "tcp", 00:20:22.381 "traddr": "10.0.0.2", 00:20:22.381 "adrfam": "ipv4", 00:20:22.381 "trsvcid": "4420", 00:20:22.381 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:22.381 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:22.381 "hdgst": false, 00:20:22.381 "ddgst": false 00:20:22.381 }, 00:20:22.381 "method": "bdev_nvme_attach_controller" 00:20:22.381 },{ 00:20:22.381 "params": { 00:20:22.381 "name": "Nvme4", 00:20:22.381 "trtype": "tcp", 00:20:22.381 "traddr": "10.0.0.2", 00:20:22.381 "adrfam": "ipv4", 00:20:22.381 "trsvcid": "4420", 00:20:22.381 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:22.381 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:22.381 "hdgst": false, 00:20:22.381 "ddgst": false 00:20:22.381 }, 00:20:22.381 "method": "bdev_nvme_attach_controller" 00:20:22.381 },{ 00:20:22.381 "params": { 00:20:22.381 "name": "Nvme5", 00:20:22.381 "trtype": "tcp", 00:20:22.381 "traddr": "10.0.0.2", 00:20:22.381 "adrfam": "ipv4", 00:20:22.381 "trsvcid": "4420", 00:20:22.381 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:22.381 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:22.381 "hdgst": false, 00:20:22.381 "ddgst": false 00:20:22.381 }, 00:20:22.381 "method": "bdev_nvme_attach_controller" 00:20:22.381 },{ 00:20:22.381 "params": { 00:20:22.381 "name": "Nvme6", 00:20:22.381 "trtype": "tcp", 00:20:22.381 "traddr": "10.0.0.2", 00:20:22.381 "adrfam": "ipv4", 00:20:22.381 "trsvcid": "4420", 00:20:22.381 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:22.381 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:22.381 "hdgst": false, 00:20:22.381 "ddgst": false 00:20:22.381 }, 00:20:22.381 "method": "bdev_nvme_attach_controller" 00:20:22.381 },{ 00:20:22.381 "params": { 00:20:22.381 "name": "Nvme7", 00:20:22.381 "trtype": "tcp", 00:20:22.381 "traddr": "10.0.0.2", 00:20:22.381 "adrfam": "ipv4", 00:20:22.381 "trsvcid": "4420", 00:20:22.381 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:22.381 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:22.381 "hdgst": false, 00:20:22.381 "ddgst": false 00:20:22.381 }, 00:20:22.381 "method": "bdev_nvme_attach_controller" 00:20:22.381 },{ 00:20:22.381 "params": { 00:20:22.381 "name": "Nvme8", 00:20:22.381 "trtype": "tcp", 00:20:22.381 "traddr": "10.0.0.2", 00:20:22.381 "adrfam": "ipv4", 00:20:22.381 "trsvcid": "4420", 00:20:22.381 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:22.381 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:22.381 "hdgst": false, 00:20:22.381 "ddgst": false 00:20:22.381 }, 00:20:22.381 "method": "bdev_nvme_attach_controller" 00:20:22.381 },{ 00:20:22.381 "params": { 00:20:22.381 "name": "Nvme9", 00:20:22.381 "trtype": "tcp", 00:20:22.381 "traddr": "10.0.0.2", 00:20:22.381 "adrfam": "ipv4", 00:20:22.381 "trsvcid": "4420", 00:20:22.381 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:22.381 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:22.381 "hdgst": false, 00:20:22.381 "ddgst": false 00:20:22.381 }, 00:20:22.381 "method": "bdev_nvme_attach_controller" 00:20:22.381 },{ 00:20:22.381 "params": { 00:20:22.381 "name": "Nvme10", 00:20:22.381 "trtype": "tcp", 00:20:22.381 "traddr": "10.0.0.2", 00:20:22.381 "adrfam": "ipv4", 00:20:22.381 "trsvcid": "4420", 00:20:22.381 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:22.381 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:22.381 "hdgst": false, 00:20:22.381 "ddgst": false 00:20:22.381 }, 00:20:22.381 "method": "bdev_nvme_attach_controller" 00:20:22.381 }' 00:20:22.381 [2024-11-26 20:50:25.948328] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:20:22.381 [2024-11-26 20:50:25.948411] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1707029 ] 00:20:22.381 [2024-11-26 20:50:26.020738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.638 [2024-11-26 20:50:26.082149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.009 Running I/O for 10 seconds... 00:20:24.575 20:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:24.575 20:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:20:24.575 20:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:24.575 20:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.575 20:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:24.575 20:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.575 20:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:24.575 20:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:24.575 20:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:24.575 20:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:24.575 20:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:20:24.575 20:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:20:24.575 20:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:24.575 20:50:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:24.575 20:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:24.576 20:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:24.576 20:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.576 20:50:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:24.576 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.576 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:20:24.576 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:20:24.576 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:24.850 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:24.850 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:24.851 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:24.851 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:24.851 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.851 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:24.851 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.851 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:20:24.851 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:20:24.851 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:20:24.851 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:20:24.851 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:20:24.851 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1706851 00:20:24.851 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1706851 ']' 00:20:24.851 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1706851 00:20:24.851 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:20:24.851 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:24.851 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 1706851 00:20:24.851 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:24.851 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:24.851 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1706851' 00:20:24.851 killing process with pid 1706851 00:20:24.851 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1706851 00:20:24.851 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1706851 00:20:24.851 [2024-11-26 20:50:28.354523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219ece0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.354615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219ece0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.354631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219ece0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.354652] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219ece0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.354680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219ece0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.354692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219ece0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.354714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219ece0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.354727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219ece0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.355755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.355791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.355811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.355822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.355834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.355846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.355857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.355869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.355881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the 
state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.355892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.355903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.355915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.355927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.355939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.355950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.355962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.355974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.355985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.355996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.356008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.356019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.356031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.356043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.356054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.356066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.356090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.356103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.356115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.356127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.356139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.356151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.356162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.356174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.356186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.356197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.356209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.356221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.356232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.356244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.356255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.356267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.356293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.356314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.356329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.356341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.356353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.356365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.356377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.356389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.356401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.356413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 20:50:28.356426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851 [2024-11-26 
20:50:28.356442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set 00:20:24.851
[2024-11-26 20:50:28.356455 - 20:50:28.356561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e82c0 is same with the state(6) to be set (message repeated at each timestamp in this range) 00:20:24.852
[2024-11-26 20:50:28.358059 - 20:50:28.358877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219f1b0 is same with the state(6) to be set (message repeated at each timestamp in this range) 00:20:24.852
[2024-11-26 20:50:28.362047 - 20:50:28.362898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0040 is same with the state(6) to be set (message repeated at each timestamp in this range) 00:20:24.853
[2024-11-26 20:50:28.362891 - 20:50:28.364071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 through cid:44 nsid:1 lba:25600 through lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.854
[2024-11-26 20:50:28.364090 - 20:50:28.364731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 through cid:63 nsid:1 lba:30336 through lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.855
[2024-11-26 20:50:28.364746 - 20:50:28.364976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 through cid:7 nsid:1 lba:24576 through lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.856
[2024-11-26 20:50:28.364113 - 20:50:28.364977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a0510 is same with the state(6) to be set (message repeated at each timestamp in this range, interleaved with the notices above) 00:20:24.856
[2024-11-26 20:50:28.365024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:24.856
[2024-11-26 20:50:28.365595 - 20:50:28.365682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 through cid:2 nsid:0 cdw10:00000000 cdw11:00000000, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.856
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.856 [2024-11-26 20:50:28.365710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.856 [2024-11-26 20:50:28.365724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26373e0 is same with the state(6) to be set 00:20:24.856 [2024-11-26 20:50:28.365781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.856 [2024-11-26 20:50:28.365802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.856 [2024-11-26 20:50:28.365817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.856 [2024-11-26 20:50:28.365836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.856 [2024-11-26 20:50:28.365851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.856 [2024-11-26 20:50:28.365865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.856 [2024-11-26 20:50:28.365880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.856 [2024-11-26 20:50:28.365893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.856 [2024-11-26 20:50:28.365907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26049e0 is same with the state(6) to be set 00:20:24.856 [2024-11-26 20:50:28.365956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.856 [2024-11-26 20:50:28.365977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.856 [2024-11-26 20:50:28.365992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.856 [2024-11-26 20:50:28.366007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.856 [2024-11-26 20:50:28.366021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.856 [2024-11-26 20:50:28.366035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.856 [2024-11-26 20:50:28.366049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.856 [2024-11-26 20:50:28.366062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.856 [2024-11-26 20:50:28.366076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2141110 is same with the 
state(6) to be set 00:20:24.856 [2024-11-26 20:50:28.366152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.856 [2024-11-26 20:50:28.366174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.856 [2024-11-26 20:50:28.366189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.856 [2024-11-26 20:50:28.366203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.856 [2024-11-26 20:50:28.366218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.856 [2024-11-26 20:50:28.366237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.856 [2024-11-26 20:50:28.366252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.856 [2024-11-26 20:50:28.366266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.856 [2024-11-26 20:50:28.366279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cd6f0 is same with the state(6) to be set 00:20:24.856 [2024-11-26 20:50:28.366340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.856 [2024-11-26 20:50:28.366362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.856 [2024-11-26 20:50:28.366381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.856 [2024-11-26 20:50:28.366396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.856 [2024-11-26 20:50:28.366410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.856 [2024-11-26 20:50:28.366423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.856 [2024-11-26 20:50:28.366438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.856 [2024-11-26 20:50:28.366452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.856 [2024-11-26 20:50:28.366445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.856 [2024-11-26 20:50:28.366467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2604fc0 is same with the state(6) to be set 00:20:24.856 [2024-11-26 20:50:28.366476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.856 [2024-11-26 20:50:28.366492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.856
[2024-11-26 20:50:28.366504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.856
[2024-11-26 20:50:28.366506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.856
[2024-11-26 20:50:28.366522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.856
[2024-11-26 20:50:28.366526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.856
[2024-11-26 20:50:28.366535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.856
[2024-11-26 20:50:28.366541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.856
[2024-11-26 20:50:28.366548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.856
[2024-11-26 20:50:28.366556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.856
[2024-11-26 20:50:28.366560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.856
[2024-11-26 20:50:28.366571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.856
[2024-11-26 20:50:28.366573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.856
[2024-11-26 20:50:28.366595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.856
[2024-11-26 20:50:28.366596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.856
[2024-11-26 20:50:28.366610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.856
[2024-11-26 20:50:28.366611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.856
[2024-11-26 20:50:28.366624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.856
[2024-11-26 20:50:28.366625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.856
[2024-11-26 20:50:28.366643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.856
[2024-11-26 20:50:28.366644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d9270 is same with the state(6) to be set 00:20:24.856
[2024-11-26 20:50:28.366657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.856
[2024-11-26 20:50:28.366670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.856
[2024-11-26 20:50:28.366682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857
[2024-11-26 20:50:28.366694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857
[2024-11-26 20:50:28.366691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.857
[2024-11-26 20:50:28.366706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857
[2024-11-26 20:50:28.366713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.857
[2024-11-26 20:50:28.366718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857
[2024-11-26 20:50:28.366729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.857
[2024-11-26 20:50:28.366730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857
[2024-11-26 20:50:28.366745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857
[2024-11-26 20:50:28.366745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.857
[2024-11-26 20:50:28.366759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857
[2024-11-26 20:50:28.366762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.857
[2024-11-26 20:50:28.366771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857
[2024-11-26 20:50:28.366777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.857
[2024-11-26 20:50:28.366783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857
[2024-11-26 20:50:28.366791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.857
[2024-11-26 20:50:28.366795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857
[2024-11-26 20:50:28.366804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.857
[2024-11-26 20:50:28.366808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857
[2024-11-26 20:50:28.366818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d9700 is same with the state(6) to be set 00:20:24.857
[2024-11-26 20:50:28.366820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with
the state(6) to be set 00:20:24.857 [2024-11-26 20:50:28.366837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857 [2024-11-26 20:50:28.366849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857 [2024-11-26 20:50:28.366861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857 [2024-11-26 20:50:28.366873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857 [2024-11-26 20:50:28.366885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857 [2024-11-26 20:50:28.366897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857 [2024-11-26 20:50:28.366909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857 [2024-11-26 20:50:28.366921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857 [2024-11-26 20:50:28.366933] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857 [2024-11-26 20:50:28.366951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857 [2024-11-26 20:50:28.366963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857 [2024-11-26 20:50:28.366975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857 [2024-11-26 20:50:28.366987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857 [2024-11-26 20:50:28.366999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857 [2024-11-26 20:50:28.367010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857 [2024-11-26 20:50:28.367022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857 [2024-11-26 20:50:28.367034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857 [2024-11-26 20:50:28.367045] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857 [2024-11-26 20:50:28.367057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857 [2024-11-26 20:50:28.367069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857 [2024-11-26 20:50:28.367081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857 [2024-11-26 20:50:28.367092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857 [2024-11-26 20:50:28.367104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857 [2024-11-26 20:50:28.367115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857 [2024-11-26 20:50:28.367127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857 [2024-11-26 20:50:28.367139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857 [2024-11-26 20:50:28.367151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857 [2024-11-26 20:50:28.367166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857 [2024-11-26 20:50:28.367178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410690 is same with the state(6) to be set 00:20:24.857 [2024-11-26 20:50:28.367376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.857 [2024-11-26 20:50:28.367402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.857 [2024-11-26 20:50:28.367424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.857 [2024-11-26 20:50:28.367441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.857 [2024-11-26 20:50:28.367457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.857 [2024-11-26 20:50:28.367472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.857 [2024-11-26 20:50:28.367488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.857 [2024-11-26 20:50:28.367502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.857 [2024-11-26 20:50:28.367518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.857 [2024-11-26 20:50:28.367533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.857 [2024-11-26 20:50:28.367550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.857 [2024-11-26 20:50:28.367564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.857 [2024-11-26 20:50:28.367580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.857 [2024-11-26 
20:50:28.367606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.857 [2024-11-26 20:50:28.367622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.857 [2024-11-26 20:50:28.367636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.857 [2024-11-26 20:50:28.367667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.857 [2024-11-26 20:50:28.367682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.857 [2024-11-26 20:50:28.367699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.857 [2024-11-26 20:50:28.367713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.857 [2024-11-26 20:50:28.367729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.857 [2024-11-26 20:50:28.367743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.857 [2024-11-26 20:50:28.367759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.857 [2024-11-26 20:50:28.367773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.857 [2024-11-26 20:50:28.367796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.857 [2024-11-26 20:50:28.367811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.857 [2024-11-26 20:50:28.367827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.857 [2024-11-26 20:50:28.367841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.857 [2024-11-26 20:50:28.367856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.857 [2024-11-26 20:50:28.367870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.857 [2024-11-26 20:50:28.367885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.857 [2024-11-26 20:50:28.367899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.857 [2024-11-26 20:50:28.367915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.857 [2024-11-26 20:50:28.367930] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.858 [2024-11-26 20:50:28.367945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.858 [2024-11-26 20:50:28.367959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.858 [2024-11-26 20:50:28.367974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.858 [2024-11-26 20:50:28.367988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.858 [2024-11-26 20:50:28.368003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.858 [2024-11-26 20:50:28.368017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.858 [2024-11-26 20:50:28.368048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.858 [2024-11-26 20:50:28.368063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.858 [2024-11-26 20:50:28.368078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.858 [2024-11-26 20:50:28.368092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.858 [2024-11-26 20:50:28.368108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.858 [2024-11-26 20:50:28.368121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.858 [2024-11-26 20:50:28.368137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.858 [2024-11-26 20:50:28.368152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.858 [2024-11-26 20:50:28.368168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.858 [2024-11-26 20:50:28.368186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.858 [2024-11-26 20:50:28.368202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.858 [2024-11-26 20:50:28.368216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.858 [2024-11-26 20:50:28.368232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.858 [2024-11-26 20:50:28.368247] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.858
[2024-11-26 20:50:28.368262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.858
[2024-11-26 20:50:28.368262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.858
[2024-11-26 20:50:28.368277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.858
[2024-11-26 20:50:28.368308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.858
[2024-11-26 20:50:28.368314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.858
[2024-11-26 20:50:28.368325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.858
[2024-11-26 20:50:28.368332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.858
[2024-11-26 20:50:28.368341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.858
[2024-11-26 20:50:28.368344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.858
[2024-11-26 20:50:28.368358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.858
[2024-11-26 20:50:28.368358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.858
[2024-11-26 20:50:28.368372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.858
[2024-11-26 20:50:28.368376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.858
[2024-11-26 20:50:28.368385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.858
[2024-11-26 20:50:28.368391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.858
[2024-11-26 20:50:28.368398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.858
[2024-11-26 20:50:28.368407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.858
[2024-11-26 20:50:28.368410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.858
[2024-11-26 20:50:28.368421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.858
[2024-11-26 20:50:28.368422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.858
[2024-11-26 20:50:28.368440] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.858
[2024-11-26 20:50:28.368443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.858
[2024-11-26 20:50:28.368451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.858
[2024-11-26 20:50:28.368458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.858
[2024-11-26 20:50:28.368464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.858
[2024-11-26 20:50:28.368474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.858
[2024-11-26 20:50:28.368476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.858
[2024-11-26 20:50:28.368490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.858
[2024-11-26 20:50:28.368491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.858
[2024-11-26 20:50:28.368504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.858
[2024-11-26 20:50:28.368508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.858
[2024-11-26 20:50:28.368516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.858
[2024-11-26 20:50:28.368523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.858
[2024-11-26 20:50:28.368528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.858
[2024-11-26 20:50:28.368539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.858
[2024-11-26 20:50:28.368540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.858
[2024-11-26 20:50:28.368555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.858
[2024-11-26 20:50:28.368555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.858
[2024-11-26 20:50:28.368569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.858
[2024-11-26 20:50:28.368573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.858
[2024-11-26 20:50:28.368593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.858
[2024-11-26 20:50:28.368594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.858
[2024-11-26 20:50:28.368622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.858
[2024-11-26 20:50:28.368627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.858
[2024-11-26 20:50:28.368634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.858
[2024-11-26 20:50:28.368642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.858
[2024-11-26 20:50:28.368646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.858
[2024-11-26 20:50:28.368659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.858
[2024-11-26 20:50:28.368660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.858
[2024-11-26 20:50:28.368670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.858
[2024-11-26 20:50:28.368675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.858
[2024-11-26 20:50:28.368682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.858
[2024-11-26 20:50:28.368690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.858
[2024-11-26 20:50:28.368694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.858
[2024-11-26 20:50:28.368704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.858
[2024-11-26 20:50:28.368706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.858
[2024-11-26 20:50:28.368719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.858
[2024-11-26 20:50:28.368719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.858
[2024-11-26 20:50:28.368732] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.858
[2024-11-26 20:50:28.368734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.858
[2024-11-26 20:50:28.368745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.859
[2024-11-26 20:50:28.368750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.859
[2024-11-26 20:50:28.368756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.859
[2024-11-26 20:50:28.368764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.859
[2024-11-26 20:50:28.368769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.859
[2024-11-26 20:50:28.368779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.859
[2024-11-26 20:50:28.368781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.859
[2024-11-26 20:50:28.368795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.859
[2024-11-26 20:50:28.368795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.859
[2024-11-26 20:50:28.368809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.859
[2024-11-26 20:50:28.368813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.859
[2024-11-26 20:50:28.368821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.859
[2024-11-26 20:50:28.368827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.859
[2024-11-26 20:50:28.368832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.859
[2024-11-26 20:50:28.368844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.859
[2024-11-26 20:50:28.368845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.859
[2024-11-26 20:50:28.368856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.859
[2024-11-26 20:50:28.368860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.859
[2024-11-26 20:50:28.368868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.859
[2024-11-26 20:50:28.368875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.859
[2024-11-26 20:50:28.368879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.859
[2024-11-26 20:50:28.368889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.859
[2024-11-26 20:50:28.368891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.859
[2024-11-26 20:50:28.368903] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.859
[2024-11-26 20:50:28.368904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.859
[2024-11-26 20:50:28.368914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.859
[2024-11-26 20:50:28.368918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.859
[2024-11-26 20:50:28.368927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.859
[2024-11-26 20:50:28.368934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.859
[2024-11-26 20:50:28.368938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.859
[2024-11-26 20:50:28.368948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.859
[2024-11-26 20:50:28.368950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.859
[2024-11-26 20:50:28.368962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.859
[2024-11-26 20:50:28.368963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.859
[2024-11-26 20:50:28.368974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.859
[2024-11-26 20:50:28.368977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.859
[2024-11-26 20:50:28.368986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.859
[2024-11-26 20:50:28.368992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.859
[2024-11-26 20:50:28.368997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.859
[2024-11-26 20:50:28.369006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.859
[2024-11-26 20:50:28.369015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.859
[2024-11-26 20:50:28.369021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.859
[2024-11-26 20:50:28.369027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.859
[2024-11-26 20:50:28.369035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.859
[2024-11-26 20:50:28.369039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.859
[2024-11-26 20:50:28.369050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.859
[2024-11-26 20:50:28.369051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2410b60 is same with the state(6) to be set 00:20:24.859
[2024-11-26 20:50:28.369066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.859
[2024-11-26 20:50:28.369081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.859
[2024-11-26 20:50:28.369095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.859
[2024-11-26 20:50:28.369110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.859
[2024-11-26 20:50:28.369124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.859
[2024-11-26 20:50:28.369139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.859
[2024-11-26 20:50:28.369153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.859
[2024-11-26 20:50:28.369168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.859
[2024-11-26 20:50:28.369181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.859
[2024-11-26 20:50:28.369196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.859
[2024-11-26 20:50:28.369210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.859
[2024-11-26 20:50:28.369225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.859
[2024-11-26 20:50:28.369238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.859
[2024-11-26 20:50:28.369253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.859
[2024-11-26 20:50:28.369267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.859
[2024-11-26 20:50:28.369282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.859
[2024-11-26 20:50:28.369296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:0 00:20:24.859 [2024-11-26 20:50:28.369341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.859 [2024-11-26 20:50:28.369357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.859 [2024-11-26 20:50:28.369372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.859 [2024-11-26 20:50:28.369387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.859 [2024-11-26 20:50:28.369402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.859 [2024-11-26 20:50:28.369416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.859 [2024-11-26 20:50:28.369432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.859 [2024-11-26 20:50:28.369446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.860 [2024-11-26 20:50:28.369465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25ccc70 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.369797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.369823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.369836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.369848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.369859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.369870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.369882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.369893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.369905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.369916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.369927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.369938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be 
set 00:20:24.860 [2024-11-26 20:50:28.369949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.369960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.369972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.369983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.369994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370224] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370505] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.370605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e7dd0 is same with the state(6) to be set 00:20:24.860 [2024-11-26 20:50:28.371037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:20:24.860 [2024-11-26 20:50:28.371075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2604fc0 (9): Bad file descriptor 00:20:24.860 [2024-11-26 20:50:28.371135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.860 [2024-11-26 20:50:28.371156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.860 [2024-11-26 20:50:28.371176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.860 [2024-11-26 20:50:28.371192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.860 [2024-11-26 20:50:28.371208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.860 [2024-11-26 20:50:28.371222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.860 [2024-11-26 20:50:28.371243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.860 [2024-11-26 20:50:28.371258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.860 [2024-11-26 20:50:28.371274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.860 [2024-11-26 20:50:28.371298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.860 [2024-11-26 20:50:28.371324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.860 [2024-11-26 
20:50:28.371339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.860 [2024-11-26 20:50:28.371355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.860 [2024-11-26 20:50:28.371369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.860 [2024-11-26 20:50:28.371386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.860 [2024-11-26 20:50:28.371400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.860 [2024-11-26 20:50:28.371416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.371430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.371446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.371460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.371475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.371489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.371504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.371518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.371533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.371547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.371563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.371577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.371602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.371617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.371633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.371651] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.371668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.371682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.371698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.371713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.371728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.371742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.371758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.371773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.371789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.371803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.371819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.371849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.371865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.371881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.371897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.371911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.371926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.371940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.371955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.371970] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.371985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.372000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.372015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.372029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.372048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.372063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.372078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.372092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.372108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.372122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.372138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.372152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.372167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.372181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.372197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.372210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.372226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.372239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.372255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.372268] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.372317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.372334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.372350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.372365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.372381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.372395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.372417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.372432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.372448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.372466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.372483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.372497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.372513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.372527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.372542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.372557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.372573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.372591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.372607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.372621] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.372636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.372651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.861 [2024-11-26 20:50:28.372667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.861 [2024-11-26 20:50:28.372681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.862 [2024-11-26 20:50:28.372696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.862 [2024-11-26 20:50:28.372710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.862 [2024-11-26 20:50:28.372726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.862 [2024-11-26 20:50:28.372740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.862 [2024-11-26 20:50:28.372755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.862 [2024-11-26 20:50:28.372769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.862 [2024-11-26 20:50:28.372785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.862 [2024-11-26 20:50:28.372799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.862 [2024-11-26 20:50:28.372815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.862 [2024-11-26 20:50:28.372829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.862 [2024-11-26 20:50:28.372845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.862 [2024-11-26 20:50:28.372862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.862 [2024-11-26 20:50:28.372878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.862 [2024-11-26 20:50:28.372893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.862 [2024-11-26 20:50:28.372913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.862 [2024-11-26 20:50:28.372928] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.862 [2024-11-26 20:50:28.372943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.862 [2024-11-26 20:50:28.372957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.862 [2024-11-26 20:50:28.372972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.862 [2024-11-26 20:50:28.372986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.862 [2024-11-26 20:50:28.373002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.862 [2024-11-26 20:50:28.373016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.862 [2024-11-26 20:50:28.373031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.862 [2024-11-26 20:50:28.373046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.862 [2024-11-26 20:50:28.373061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.862 [2024-11-26 20:50:28.373075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.862 [2024-11-26 20:50:28.373090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.862 [2024-11-26 20:50:28.373105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.862 [2024-11-26 20:50:28.373120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.862 [2024-11-26 20:50:28.373135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.862 [2024-11-26 20:50:28.373150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.862 [2024-11-26 20:50:28.373165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.862 [2024-11-26 20:50:28.375007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:20:24.862 [2024-11-26 20:50:28.375041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d9270 (9): Bad file descriptor 00:20:24.862 [2024-11-26 20:50:28.376985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:24.862 [2024-11-26 20:50:28.377020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d9700 (9): Bad file descriptor 
00:20:24.862 [2024-11-26 20:50:28.377160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:24.862 [2024-11-26 20:50:28.377190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2604fc0 with addr=10.0.0.2, port=4420 00:20:24.862 [2024-11-26 20:50:28.377208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2604fc0 is same with the state(6) to be set 00:20:24.862 [2024-11-26 20:50:28.377261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26373e0 (9): Bad file descriptor 00:20:24.862 [2024-11-26 20:50:28.377324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26049e0 (9): Bad file descriptor 00:20:24.862 [2024-11-26 20:50:28.377362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2141110 (9): Bad file descriptor 00:20:24.862 [2024-11-26 20:50:28.377420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.862 [2024-11-26 20:50:28.377443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.862 [2024-11-26 20:50:28.377459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.862 [2024-11-26 20:50:28.377474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.862 [2024-11-26 20:50:28.377488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.862 [2024-11-26 20:50:28.377502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.862 [2024-11-26 20:50:28.377516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.862 [2024-11-26 20:50:28.377530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.862 [2024-11-26 20:50:28.377543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2646270 is same with the state(6) to be set 00:20:24.862 [2024-11-26 20:50:28.377593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.862 [2024-11-26 20:50:28.377620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.862 [2024-11-26 20:50:28.377635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.862 [2024-11-26 20:50:28.377649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.862 [2024-11-26 20:50:28.377664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.862 [2024-11-26 20:50:28.377677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.862 
[2024-11-26 20:50:28.377691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.862 [2024-11-26 20:50:28.377705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.862 [2024-11-26 20:50:28.377718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2646450 is same with the state(6) to be set 00:20:24.862 [2024-11-26 20:50:28.377766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.862 [2024-11-26 20:50:28.377788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.862 [2024-11-26 20:50:28.377809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.862 [2024-11-26 20:50:28.377823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.862 [2024-11-26 20:50:28.377838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.862 [2024-11-26 20:50:28.377851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.862 [2024-11-26 20:50:28.377865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:24.862 [2024-11-26 20:50:28.377878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.862 [2024-11-26 20:50:28.377891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637200 is same with the state(6) to be set 00:20:24.862 [2024-11-26 20:50:28.377922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21cd6f0 (9): Bad file descriptor 00:20:24.862 [2024-11-26 20:50:28.378397] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:24.862 [2024-11-26 20:50:28.378741] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:24.862 [2024-11-26 20:50:28.378850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:24.862 [2024-11-26 20:50:28.378878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d9270 with addr=10.0.0.2, port=4420 00:20:24.862 [2024-11-26 20:50:28.378895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d9270 is same with the state(6) to be set 00:20:24.862 [2024-11-26 20:50:28.378924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2604fc0 (9): Bad file descriptor 00:20:24.862 [2024-11-26 20:50:28.379275] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:24.862 [2024-11-26 20:50:28.379399] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:24.862 [2024-11-26 20:50:28.379468] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:24.862 [2024-11-26 20:50:28.379541] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:24.862 [2024-11-26 
20:50:28.379737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:24.862 [2024-11-26 20:50:28.379766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d9700 with addr=10.0.0.2, port=4420 00:20:24.862 [2024-11-26 20:50:28.379782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d9700 is same with the state(6) to be set 00:20:24.862 [2024-11-26 20:50:28.379802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d9270 (9): Bad file descriptor 00:20:24.863 [2024-11-26 20:50:28.379821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:20:24.863 [2024-11-26 20:50:28.379835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:20:24.863 [2024-11-26 20:50:28.379851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:20:24.863 [2024-11-26 20:50:28.379867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:20:24.863 [2024-11-26 20:50:28.379964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.379987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.380012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.380029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.380052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.380068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.380085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.380099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.380115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.380129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.380145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.380159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.380176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.380191] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.380207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.380221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.380237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.380251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.380268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.380282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.380324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.380343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.380358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.380373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.380389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.380404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.380421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.380436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.380452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.380471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.380488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.380502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.380518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.380532] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.380548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.380563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.380579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.380596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.380612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.380626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.380642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.380657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.380672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.380687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.380703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.380718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.380734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.380749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.380764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.380778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.380795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.380809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.380825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.380839] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.380864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.380879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.380895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.380909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.380925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.380939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.380955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.380969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.380985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.380999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.381015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.381029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.381045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.381059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.381075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.381089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.381105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.381119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.381135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.381149] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.863 [2024-11-26 20:50:28.381165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.863 [2024-11-26 20:50:28.381179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.864 [2024-11-26 20:50:28.381195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.864 [2024-11-26 20:50:28.381209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.864 [2024-11-26 20:50:28.381225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.864 [2024-11-26 20:50:28.381243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.864 [2024-11-26 20:50:28.381260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.864 [2024-11-26 20:50:28.381274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.864 [2024-11-26 20:50:28.381299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.864 [2024-11-26 20:50:28.381321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.864 [2024-11-26 20:50:28.381338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.864 [2024-11-26 20:50:28.381352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.864 [2024-11-26 20:50:28.381369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.864 [2024-11-26 20:50:28.381383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.864 [2024-11-26 20:50:28.381399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.864 [2024-11-26 20:50:28.381413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.864 [2024-11-26 20:50:28.381429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.864 [2024-11-26 20:50:28.381443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.864 [2024-11-26 20:50:28.381459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.864 [2024-11-26 20:50:28.381473] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.864 [2024-11-26 20:50:28.381489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.864 [2024-11-26 20:50:28.381506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.864 [2024-11-26 20:50:28.381523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.864 [2024-11-26 20:50:28.381537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.864 [2024-11-26 20:50:28.381553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.864 [2024-11-26 20:50:28.381568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.864 [2024-11-26 20:50:28.381585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.864 [2024-11-26 20:50:28.381608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.864 [2024-11-26 20:50:28.381624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.864 [2024-11-26 20:50:28.381639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.864 [2024-11-26 20:50:28.381659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.864 [2024-11-26 20:50:28.381674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.864 [2024-11-26 20:50:28.381691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.864 [2024-11-26 20:50:28.381705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.864 [2024-11-26 20:50:28.381721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.864 [2024-11-26 20:50:28.381735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.864 [2024-11-26 20:50:28.381751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.864 [2024-11-26 20:50:28.381766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.864 [2024-11-26 20:50:28.381782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.864 [2024-11-26 20:50:28.381796] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.864 [2024-11-26 20:50:28.381813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.864 [2024-11-26 20:50:28.381827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.864 [2024-11-26 20:50:28.381843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.864 [2024-11-26 20:50:28.381857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.864 [2024-11-26 20:50:28.381873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.864 [2024-11-26 20:50:28.381888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.864 [2024-11-26 20:50:28.381904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.864 [2024-11-26 20:50:28.381919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.864 [2024-11-26 20:50:28.381935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.864 [2024-11-26 20:50:28.381950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.864 [2024-11-26 20:50:28.381966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.864 [2024-11-26 20:50:28.381980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.864 [2024-11-26 20:50:28.381996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.864 [2024-11-26 20:50:28.382011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.864 [2024-11-26 20:50:28.382025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25ddcc0 is same with the state(6) to be set 00:20:24.864 [2024-11-26 20:50:28.382199] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:24.864 [2024-11-26 20:50:28.382249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d9700 (9): Bad file descriptor 00:20:24.864 [2024-11-26 20:50:28.382273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:20:24.864 [2024-11-26 20:50:28.382297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:20:24.864 [2024-11-26 20:50:28.382320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 
00:20:24.864 [2024-11-26 20:50:28.382335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:20:24.864 [2024-11-26 20:50:28.383591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:20:24.864 [2024-11-26 20:50:28.383629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2646270 (9): Bad file descriptor 00:20:24.864 [2024-11-26 20:50:28.383651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:24.864 [2024-11-26 20:50:28.383666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:24.864 [2024-11-26 20:50:28.383681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:24.864 [2024-11-26 20:50:28.383694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:20:24.864 [2024-11-26 20:50:28.384184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:24.864 [2024-11-26 20:50:28.384213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2646270 with addr=10.0.0.2, port=4420 00:20:24.864 [2024-11-26 20:50:28.384230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2646270 is same with the state(6) to be set 00:20:24.864 [2024-11-26 20:50:28.384297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2646270 (9): Bad file descriptor 00:20:24.864 [2024-11-26 20:50:28.384375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:20:24.864 [2024-11-26 20:50:28.384394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:20:24.864 [2024-11-26 20:50:28.384408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:20:24.865 [2024-11-26 20:50:28.384422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:20:24.865 [2024-11-26 20:50:28.385172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:20:24.865 [2024-11-26 20:50:28.385328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:24.865 [2024-11-26 20:50:28.385357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2604fc0 with addr=10.0.0.2, port=4420 00:20:24.865 [2024-11-26 20:50:28.385373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2604fc0 is same with the state(6) to be set 00:20:24.865 [2024-11-26 20:50:28.385427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2604fc0 (9): Bad file descriptor 00:20:24.865 [2024-11-26 20:50:28.385482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:20:24.865 [2024-11-26 20:50:28.385499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:20:24.865 [2024-11-26 20:50:28.385513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 
00:20:24.865 [2024-11-26 20:50:28.385529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:20:24.865 [2024-11-26 20:50:28.387058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2646450 (9): Bad file descriptor 00:20:24.865 [2024-11-26 20:50:28.387095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2637200 (9): Bad file descriptor 00:20:24.865 [2024-11-26 20:50:28.387236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.865 [2024-11-26 20:50:28.387260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.865 [2024-11-26 20:50:28.387284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.865 [2024-11-26 20:50:28.387300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.865 [2024-11-26 20:50:28.387327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.865 [2024-11-26 20:50:28.387343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.865 [2024-11-26 20:50:28.387359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.865 [2024-11-26 20:50:28.387374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.865 [2024-11-26 20:50:28.387389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.865 [2024-11-26 20:50:28.387403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.865 [2024-11-26 20:50:28.387419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.865 [2024-11-26 20:50:28.387433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.865 [2024-11-26 20:50:28.387449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.865 [2024-11-26 20:50:28.387464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.865 [2024-11-26 20:50:28.387479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.865 [2024-11-26 20:50:28.387494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.865 [2024-11-26 20:50:28.387510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.865 [2024-11-26 20:50:28.387524] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:24.865 [2024-11-26 20:50:28.387540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.865 [2024-11-26 20:50:28.387554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[identical READ / ABORTED - SQ DELETION (00/08) command-completion pairs repeat for cid:10 through cid:63, lba:17664 through lba:24448]
00:20:24.866 [2024-11-26 20:50:28.389238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de950 is same with the state(6) to be set
00:20:24.866 [2024-11-26 20:50:28.390521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.866 [2024-11-26 20:50:28.390544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[identical READ / ABORTED - SQ DELETION (00/08) command-completion pairs repeat for cid:1 through cid:63, lba:16512 through lba:24448]
00:20:24.868 [2024-11-26 20:50:28.392530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25db240 is same with the state(6) to be set
00:20:24.868 [2024-11-26 20:50:28.393758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.868 [2024-11-26 20:50:28.393782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[identical READ / ABORTED - SQ DELETION (00/08) command-completion pairs repeat for cid:1 through cid:63, lba:16512 through lba:24448]
00:20:24.870 [2024-11-26 20:50:28.395775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25dc780 is same with the state(6) to be set
00:20:24.870 [2024-11-26 20:50:28.397055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:24.870 [2024-11-26 20:50:28.397079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[identical READ / ABORTED - SQ DELETION (00/08) command-completion pairs repeat for cid:1 through cid:25, lba:16512 through lba:19584]
00:20:24.870 [2024-11-26 20:50:28.397888] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.870 [2024-11-26 20:50:28.397903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.870 [2024-11-26 20:50:28.397919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.870 [2024-11-26 20:50:28.397932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.870 [2024-11-26 20:50:28.397949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.870 [2024-11-26 20:50:28.397963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.870 [2024-11-26 20:50:28.397979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.870 [2024-11-26 20:50:28.397993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.870 [2024-11-26 20:50:28.398009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.870 [2024-11-26 20:50:28.398023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.870 [2024-11-26 20:50:28.398041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.871 [2024-11-26 20:50:28.398060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.871 [2024-11-26 20:50:28.398076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.871 [2024-11-26 20:50:28.398091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.871 [2024-11-26 20:50:28.398107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.871 [2024-11-26 20:50:28.398122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.871 [2024-11-26 20:50:28.398137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.871 [2024-11-26 20:50:28.398152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.871 [2024-11-26 20:50:28.398167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.871 [2024-11-26 20:50:28.398182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.871 [2024-11-26 20:50:28.398198] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.871 [2024-11-26 20:50:28.398212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.871 [2024-11-26 20:50:28.398228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.871 [2024-11-26 20:50:28.398243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.871 [2024-11-26 20:50:28.398259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.871 [2024-11-26 20:50:28.398273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.871 [2024-11-26 20:50:28.398289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.871 [2024-11-26 20:50:28.398311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.871 [2024-11-26 20:50:28.398329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.871 [2024-11-26 20:50:28.398344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.871 [2024-11-26 20:50:28.398359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.871 [2024-11-26 20:50:28.398374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.871 [2024-11-26 20:50:28.398390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.871 [2024-11-26 20:50:28.398404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.871 [2024-11-26 20:50:28.398420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.871 [2024-11-26 20:50:28.398434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.871 [2024-11-26 20:50:28.398454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.871 [2024-11-26 20:50:28.398470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.871 [2024-11-26 20:50:28.398486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.871 [2024-11-26 20:50:28.398500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.871 [2024-11-26 20:50:28.398516] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.871 [2024-11-26 20:50:28.398530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.871 [2024-11-26 20:50:28.398547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.871 [2024-11-26 20:50:28.398561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.871 [2024-11-26 20:50:28.398577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.871 [2024-11-26 20:50:28.398591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.871 [2024-11-26 20:50:28.398608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.871 [2024-11-26 20:50:28.398622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.871 [2024-11-26 20:50:28.398638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.871 [2024-11-26 20:50:28.398652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.871 [2024-11-26 20:50:28.398668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.871 [2024-11-26 20:50:28.398682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.871 [2024-11-26 20:50:28.398698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.871 [2024-11-26 20:50:28.398712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.871 [2024-11-26 20:50:28.398728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.871 [2024-11-26 20:50:28.398742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.871 [2024-11-26 20:50:28.398757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.871 [2024-11-26 20:50:28.398771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.871 [2024-11-26 20:50:28.398787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.871 [2024-11-26 20:50:28.398801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.871 [2024-11-26 20:50:28.398817] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.871 [2024-11-26 20:50:28.398838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.871 [2024-11-26 20:50:28.398854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.871 [2024-11-26 20:50:28.398869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.871 [2024-11-26 20:50:28.398886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.871 [2024-11-26 20:50:28.398900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.871 [2024-11-26 20:50:28.398916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.871 [2024-11-26 20:50:28.398932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.871 [2024-11-26 20:50:28.398947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.871 [2024-11-26 20:50:28.398961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.871 [2024-11-26 20:50:28.398977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.871 [2024-11-26 20:50:28.398992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.871 [2024-11-26 20:50:28.399008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.871 [2024-11-26 20:50:28.399022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.871 [2024-11-26 20:50:28.399038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.871 [2024-11-26 20:50:28.399052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.871 [2024-11-26 20:50:28.399067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e1330 is same with the state(6) to be set 00:20:24.871 [2024-11-26 20:50:28.400290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:20:24.871 [2024-11-26 20:50:28.400332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:20:24.871 [2024-11-26 20:50:28.400353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:20:24.871 [2024-11-26 20:50:28.400373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:20:24.871 [2024-11-26 
20:50:28.400791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:24.871 [2024-11-26 20:50:28.400823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21cd6f0 with addr=10.0.0.2, port=4420 00:20:24.871 [2024-11-26 20:50:28.400841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cd6f0 is same with the state(6) to be set 00:20:24.871 [2024-11-26 20:50:28.400924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:24.871 [2024-11-26 20:50:28.400949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26049e0 with addr=10.0.0.2, port=4420 00:20:24.871 [2024-11-26 20:50:28.400965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26049e0 is same with the state(6) to be set 00:20:24.871 [2024-11-26 20:50:28.401052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:24.871 [2024-11-26 20:50:28.401082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2141110 with addr=10.0.0.2, port=4420 00:20:24.871 [2024-11-26 20:50:28.401100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2141110 is same with the state(6) to be set 00:20:24.871 [2024-11-26 20:50:28.401177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:24.871 [2024-11-26 20:50:28.401202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26373e0 with addr=10.0.0.2, port=4420 00:20:24.872 [2024-11-26 20:50:28.401218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26373e0 is same with the state(6) to be set 00:20:24.872 [2024-11-26 20:50:28.402146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.402170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 20:50:28.402194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.402210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 20:50:28.402227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.402241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 20:50:28.402258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.402272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 20:50:28.402288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.402311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 
[2024-11-26 20:50:28.402330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.402349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 20:50:28.402366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.402381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 20:50:28.402397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.402412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 20:50:28.402428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.402443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 20:50:28.402459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.402474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 20:50:28.402490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.402510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 20:50:28.402526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.402541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 20:50:28.402557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.402572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 20:50:28.402588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.402602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 20:50:28.402617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.402632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 
20:50:28.402647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.402662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 20:50:28.402678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.402692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 20:50:28.402707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.402722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 20:50:28.402738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.402752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 20:50:28.402767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.402782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 20:50:28.402797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.402812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 20:50:28.402828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.402842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 20:50:28.402859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.402874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 20:50:28.402895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.402910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 20:50:28.402926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.402941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 20:50:28.402956] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.402971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 20:50:28.402987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.403001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 20:50:28.403017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.403031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 20:50:28.403047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.403061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 20:50:28.403077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.403090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 20:50:28.403106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.403121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 20:50:28.403137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.403151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 20:50:28.403167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.403182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 20:50:28.403198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.403213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 20:50:28.403230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.403244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 20:50:28.403260] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.403278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 20:50:28.403295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.403321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 20:50:28.403339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.403353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.872 [2024-11-26 20:50:28.403369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.872 [2024-11-26 20:50:28.403384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.873 [2024-11-26 20:50:28.403400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.873 [2024-11-26 20:50:28.403414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.873 [2024-11-26 20:50:28.403430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.873 [2024-11-26 20:50:28.403445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.873 [2024-11-26 20:50:28.403461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.873 [2024-11-26 20:50:28.403475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.873 [2024-11-26 20:50:28.403491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.873 [2024-11-26 20:50:28.403506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.873 [2024-11-26 20:50:28.403522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.873 [2024-11-26 20:50:28.403536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.873 [2024-11-26 20:50:28.403552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.873 [2024-11-26 20:50:28.403566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.873 [2024-11-26 20:50:28.403582] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.873 [2024-11-26 20:50:28.403596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.873 [2024-11-26 20:50:28.403613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.873 [2024-11-26 20:50:28.403627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.873 [2024-11-26 20:50:28.403643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.873 [2024-11-26 20:50:28.403658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.873 [2024-11-26 20:50:28.403678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.873 [2024-11-26 20:50:28.403693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.873 [2024-11-26 20:50:28.403710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.873 [2024-11-26 20:50:28.403724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.873 [2024-11-26 20:50:28.403741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.873 [2024-11-26 20:50:28.403755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.873 [2024-11-26 20:50:28.403771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.873 [2024-11-26 20:50:28.403785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.873 [2024-11-26 20:50:28.403801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.873 [2024-11-26 20:50:28.403815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.873 [2024-11-26 20:50:28.403831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.873 [2024-11-26 20:50:28.403846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.873 [2024-11-26 20:50:28.403862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.873 [2024-11-26 20:50:28.403877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.873 [2024-11-26 20:50:28.403893] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.873 [2024-11-26 20:50:28.403907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.873 [2024-11-26 20:50:28.403923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.873 [2024-11-26 20:50:28.403938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.873 [2024-11-26 20:50:28.403954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.873 [2024-11-26 20:50:28.403968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.873 [2024-11-26 20:50:28.403984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.873 [2024-11-26 20:50:28.403998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.873 [2024-11-26 20:50:28.404012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25df200 is same with the state(6) to be set 00:20:24.873 [2024-11-26 20:50:28.405243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.873 [2024-11-26 20:50:28.405266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.873 [2024-11-26 20:50:28.405297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.873 [2024-11-26 20:50:28.405323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.873 [2024-11-26 20:50:28.405340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.873 [2024-11-26 20:50:28.405355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.873 [2024-11-26 20:50:28.405371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.873 [2024-11-26 20:50:28.405386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.873 [2024-11-26 20:50:28.405402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.873 [2024-11-26 20:50:28.405416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.873 [2024-11-26 20:50:28.405432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.873 [2024-11-26 20:50:28.405447] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.873 [2024-11-26 20:50:28.405463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.873 [2024-11-26 20:50:28.405477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.873 [2024-11-26 20:50:28.405493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.873 [2024-11-26 20:50:28.405508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.873 [2024-11-26 20:50:28.405524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.873 [2024-11-26 20:50:28.405539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.873 [2024-11-26 20:50:28.405554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.873 [2024-11-26 20:50:28.405568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.873 [2024-11-26 20:50:28.405584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.873 [2024-11-26 20:50:28.405599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.873 [2024-11-26 20:50:28.405615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.405630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.405645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.405660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.405676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.405695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.405712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.405727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.405742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.405757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.405773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.405787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.405803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.405818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.405834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.405848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.405864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.405879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.405895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.405909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.405925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.405940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.405956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.405970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.405985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.406000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.406016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.406030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.406046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.406060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.406079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.406096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.406112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.406126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.406143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.406157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.406173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.406187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.406202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.406216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.406233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.406247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.406263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.406277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.406293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.406314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.406332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.406347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.406363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.406377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.406393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.406410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.406426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.406441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.406457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.406476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.406493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.406507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.406523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.406538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.406554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.406568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.406585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.406600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.406616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.406631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.406647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.406662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.406679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.406694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:24.874 [2024-11-26 20:50:28.406710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.406725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.406741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.406756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.406772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.406787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.406803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.406818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.874 [2024-11-26 20:50:28.406834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.874 [2024-11-26 20:50:28.406849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.875 [2024-11-26 20:50:28.406869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.875 [2024-11-26 20:50:28.406884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.875 [2024-11-26 20:50:28.406900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.875 [2024-11-26 20:50:28.406915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.875 [2024-11-26 20:50:28.406932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.875 [2024-11-26 20:50:28.406947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.875 [2024-11-26 20:50:28.406963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.875 [2024-11-26 20:50:28.406978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.875 [2024-11-26 20:50:28.406994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.875 [2024-11-26 20:50:28.407009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.875 [2024-11-26 
20:50:28.407025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.875 [2024-11-26 20:50:28.407039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.875 [2024-11-26 20:50:28.407055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.875 [2024-11-26 20:50:28.407070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.875 [2024-11-26 20:50:28.407087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.875 [2024-11-26 20:50:28.407101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.875 [2024-11-26 20:50:28.407117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.875 [2024-11-26 20:50:28.407131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.875 [2024-11-26 20:50:28.407148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.875 [2024-11-26 20:50:28.407163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.875 [2024-11-26 20:50:28.407179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.875 [2024-11-26 20:50:28.407194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.875 [2024-11-26 20:50:28.407210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.875 [2024-11-26 20:50:28.407225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.875 [2024-11-26 20:50:28.407241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:24.875 [2024-11-26 20:50:28.407259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:24.875 [2024-11-26 20:50:28.407275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25e0740 is same with the state(6) to be set 00:20:24.875 [2024-11-26 20:50:28.409211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:20:24.875 [2024-11-26 20:50:28.409247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:24.875 [2024-11-26 20:50:28.409267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:20:24.875 [2024-11-26 20:50:28.409285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 
00:20:24.875 [2024-11-26 20:50:28.409315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:20:24.875 task offset: 25600 on job bdev=Nvme4n1 fails
00:20:24.875
00:20:24.875 Latency(us)
00:20:24.875 [2024-11-26T19:50:28.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:24.875 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:24.875 Job: Nvme1n1 ended in about 0.90 seconds with error
00:20:24.875 Verification LBA range: start 0x0 length 0x400
00:20:24.875 Nvme1n1 : 0.90 213.26 13.33 71.09 0.00 222537.20 8155.59 250104.79
00:20:24.875 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:24.875 Job: Nvme2n1 ended in about 0.91 seconds with error
00:20:24.875 Verification LBA range: start 0x0 length 0x400
00:20:24.875 Nvme2n1 : 0.91 139.97 8.75 69.98 0.00 295439.93 21456.97 264085.81
00:20:24.875 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:24.875 Job: Nvme3n1 ended in about 0.90 seconds with error
00:20:24.875 Verification LBA range: start 0x0 length 0x400
00:20:24.875 Nvme3n1 : 0.90 213.68 13.36 71.23 0.00 212890.64 11019.76 256318.58
00:20:24.875 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:24.875 Job: Nvme4n1 ended in about 0.89 seconds with error
00:20:24.875 Verification LBA range: start 0x0 length 0x400
00:20:24.875 Nvme4n1 : 0.89 214.58 13.41 71.53 0.00 207358.39 6941.96 254765.13
00:20:24.875 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:24.875 Job: Nvme5n1 ended in about 0.92 seconds with error
00:20:24.875 Verification LBA range: start 0x0 length 0x400
00:20:24.875 Nvme5n1 : 0.92 139.47 8.72 69.74 0.00 278162.39 22816.24 248551.35
00:20:24.875 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:24.875 Job: Nvme6n1 ended in about 0.92 seconds with error
00:20:24.875 Verification LBA range: start 0x0 length 0x400
00:20:24.875 Nvme6n1 : 0.92 138.98 8.69 69.49 0.00 273127.35 39612.87 234570.33
00:20:24.875 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:24.875 Job: Nvme7n1 ended in about 0.91 seconds with error
00:20:24.875 Verification LBA range: start 0x0 length 0x400
00:20:24.875 Nvme7n1 : 0.91 217.04 13.57 70.51 0.00 193200.26 7961.41 243891.01
00:20:24.875 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:24.875 Job: Nvme8n1 ended in about 0.93 seconds with error
00:20:24.875 Verification LBA range: start 0x0 length 0x400
00:20:24.875 Nvme8n1 : 0.93 143.13 8.95 63.50 0.00 263225.39 19223.89 254765.13
00:20:24.875 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:24.875 Job: Nvme9n1 ended in about 0.93 seconds with error
00:20:24.875 Verification LBA range: start 0x0 length 0x400
00:20:24.875 Nvme9n1 : 0.93 137.27 8.58 68.64 0.00 259174.91 19612.25 288940.94
00:20:24.875 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:24.875 Job: Nvme10n1 ended in about 0.92 seconds with error
00:20:24.875 Verification LBA range: start 0x0 length 0x400
00:20:24.875 Nvme10n1 : 0.92 138.49 8.66 69.24 0.00 250494.67 23301.69 267192.70
00:20:24.875 [2024-11-26T19:50:28.572Z] ===================================================================================================================
00:20:24.875 [2024-11-26T19:50:28.572Z] Total : 1695.87 105.99 694.94 0.00 241149.23 6941.96 288940.94
00:20:24.875
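As a quick sanity check of the bdevperf summary above: the Total row simply aggregates the ten per-job rows, and the IOPS column sums exactly to the reported 1695.87 (the MiB/s and Fail/s columns likewise match their totals to within rounding). A minimal check, assuming the per-job IOPS values as printed above:

    printf '%s\n' 213.26 139.97 213.68 214.58 139.47 138.98 217.04 143.13 137.27 138.49 \
        | awk '{ s += $1 } END { printf "total IOPS = %.2f\n", s }'   # prints: total IOPS = 1695.87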
[2024-11-26 20:50:28.436433] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:24.875 [2024-11-26 20:50:28.436523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:20:24.875 [2024-11-26 20:50:28.436654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21cd6f0 (9): Bad file descriptor 00:20:24.875 [2024-11-26 20:50:28.436684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26049e0 (9): Bad file descriptor 00:20:24.875 [2024-11-26 20:50:28.436706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2141110 (9): Bad file descriptor 00:20:24.875 [2024-11-26 20:50:28.436725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26373e0 (9): Bad file descriptor 00:20:24.875 [2024-11-26 20:50:28.437110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:24.875 [2024-11-26 20:50:28.437153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d9270 with addr=10.0.0.2, port=4420 00:20:24.875 [2024-11-26 20:50:28.437174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d9270 is same with the state(6) to be set 00:20:24.875 [2024-11-26 20:50:28.437291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:24.875 [2024-11-26 20:50:28.437327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d9700 with addr=10.0.0.2, port=4420 00:20:24.875 [2024-11-26 20:50:28.437350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d9700 is same with the state(6) to be set 00:20:24.875 [2024-11-26 20:50:28.437436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:24.875 [2024-11-26 20:50:28.437462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2646270 with addr=10.0.0.2, port=4420 00:20:24.875 [2024-11-26 20:50:28.437480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2646270 is same with the state(6) to be set 00:20:24.875 [2024-11-26 20:50:28.437566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:24.875 [2024-11-26 20:50:28.437593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2604fc0 with addr=10.0.0.2, port=4420 00:20:24.875 [2024-11-26 20:50:28.437610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2604fc0 is same with the state(6) to be set 00:20:24.875 [2024-11-26 20:50:28.437691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:24.875 [2024-11-26 20:50:28.437717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2646450 with addr=10.0.0.2, port=4420 00:20:24.875 [2024-11-26 20:50:28.437734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2646450 is same with the state(6) to be set 00:20:24.875 [2024-11-26 20:50:28.437815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:24.875 [2024-11-26 20:50:28.437840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2637200 with addr=10.0.0.2, port=4420 00:20:24.875 [2024-11-26 20:50:28.437856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2637200 is 
same with the state(6) to be set 00:20:24.875 [2024-11-26 20:50:28.437872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:20:24.876 [2024-11-26 20:50:28.437886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:20:24.876 [2024-11-26 20:50:28.437902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:20:24.876 [2024-11-26 20:50:28.437920] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:20:24.876 [2024-11-26 20:50:28.437946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:20:24.876 [2024-11-26 20:50:28.437961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:20:24.876 [2024-11-26 20:50:28.437974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:20:24.876 [2024-11-26 20:50:28.437988] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:20:24.876 [2024-11-26 20:50:28.438002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:20:24.876 [2024-11-26 20:50:28.438016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:20:24.876 [2024-11-26 20:50:28.438029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:20:24.876 [2024-11-26 20:50:28.438042] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:20:24.876 [2024-11-26 20:50:28.438057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:20:24.876 [2024-11-26 20:50:28.438070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:20:24.876 [2024-11-26 20:50:28.438083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:20:24.876 [2024-11-26 20:50:28.438096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:20:24.876 [2024-11-26 20:50:28.438863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d9270 (9): Bad file descriptor 00:20:24.876 [2024-11-26 20:50:28.438896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d9700 (9): Bad file descriptor 00:20:24.876 [2024-11-26 20:50:28.438917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2646270 (9): Bad file descriptor 00:20:24.876 [2024-11-26 20:50:28.438936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2604fc0 (9): Bad file descriptor 00:20:24.876 [2024-11-26 20:50:28.438964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2646450 (9): Bad file descriptor 00:20:24.876 [2024-11-26 20:50:28.438982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2637200 (9): Bad file descriptor 00:20:24.876 [2024-11-26 20:50:28.439046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:20:24.876 [2024-11-26 20:50:28.439072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:20:24.876 [2024-11-26 20:50:28.439090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:20:24.876 [2024-11-26 20:50:28.439107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:20:24.876 [2024-11-26 20:50:28.439152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:20:24.876 [2024-11-26 20:50:28.439170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:20:24.876 [2024-11-26 20:50:28.439184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:20:24.876 [2024-11-26 20:50:28.439197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:20:24.876 [2024-11-26 20:50:28.439212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:24.876 [2024-11-26 20:50:28.439225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:24.876 [2024-11-26 20:50:28.439239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:24.876 [2024-11-26 20:50:28.439258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:20:24.876 [2024-11-26 20:50:28.439272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:20:24.876 [2024-11-26 20:50:28.439285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:20:24.876 [2024-11-26 20:50:28.439299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:20:24.876 [2024-11-26 20:50:28.439322] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:20:24.876 [2024-11-26 20:50:28.439337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:20:24.876 [2024-11-26 20:50:28.439359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:20:24.876 [2024-11-26 20:50:28.439372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:20:24.876 [2024-11-26 20:50:28.439386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:20:24.876 [2024-11-26 20:50:28.439400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:20:24.876 [2024-11-26 20:50:28.439413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:20:24.876 [2024-11-26 20:50:28.439426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:20:24.876 [2024-11-26 20:50:28.439440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:20:24.876 [2024-11-26 20:50:28.439454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:20:24.876 [2024-11-26 20:50:28.439467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:20:24.876 [2024-11-26 20:50:28.439480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:20:24.876 [2024-11-26 20:50:28.439493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:20:24.876 [2024-11-26 20:50:28.439633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:24.876 [2024-11-26 20:50:28.439661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26373e0 with addr=10.0.0.2, port=4420 00:20:24.876 [2024-11-26 20:50:28.439679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26373e0 is same with the state(6) to be set 00:20:24.876 [2024-11-26 20:50:28.439772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:24.876 [2024-11-26 20:50:28.439798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2141110 with addr=10.0.0.2, port=4420 00:20:24.876 [2024-11-26 20:50:28.439815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2141110 is same with the state(6) to be set 00:20:24.876 [2024-11-26 20:50:28.439887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:24.876 [2024-11-26 20:50:28.439913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x26049e0 with addr=10.0.0.2, port=4420 00:20:24.876 [2024-11-26 20:50:28.439929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26049e0 is same with the state(6) to be set 00:20:24.876 [2024-11-26 20:50:28.440003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:24.876 [2024-11-26 20:50:28.440028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21cd6f0 with addr=10.0.0.2, port=4420 00:20:24.876 [2024-11-26 20:50:28.440045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cd6f0 is same with the state(6) to be set 00:20:24.876 [2024-11-26 20:50:28.440098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26373e0 (9): Bad file descriptor 00:20:24.876 [2024-11-26 20:50:28.440124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2141110 (9): Bad file descriptor 00:20:24.876 [2024-11-26 20:50:28.440143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26049e0 (9): Bad file descriptor 00:20:24.876 [2024-11-26 20:50:28.440163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21cd6f0 (9): Bad file descriptor 00:20:24.876 [2024-11-26 20:50:28.440205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:20:24.876 [2024-11-26 20:50:28.440224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:20:24.876 [2024-11-26 20:50:28.440238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:20:24.876 [2024-11-26 20:50:28.440253] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:20:24.876 [2024-11-26 20:50:28.440268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:20:24.876 [2024-11-26 20:50:28.440284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:20:24.876 [2024-11-26 20:50:28.440297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 
00:20:24.876 [2024-11-26 20:50:28.440320] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:20:24.876 [2024-11-26 20:50:28.440336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:20:24.876 [2024-11-26 20:50:28.440350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:20:24.876 [2024-11-26 20:50:28.440363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:20:24.876 [2024-11-26 20:50:28.440377] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:20:24.876 [2024-11-26 20:50:28.440391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:20:24.876 [2024-11-26 20:50:28.440403] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:20:24.876 [2024-11-26 20:50:28.440418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:20:24.876 [2024-11-26 20:50:28.440430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:20:25.444 20:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1707029 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1707029 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1707029 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:20:26.381 20:50:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:26.381 rmmod nvme_tcp 00:20:26.381 rmmod nvme_fabrics 00:20:26.381 rmmod nvme_keyring 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1706851 ']' 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1706851 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1706851 ']' 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1706851 00:20:26.381 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1706851) - No such process 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1706851 is not found' 00:20:26.381 Process with pid 1706851 is not found 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:26.381 20:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 
-- # iptables-restore 00:20:26.381 20:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:26.381 20:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:26.381 20:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.381 20:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:26.381 20:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:28.923 00:20:28.923 real 0m7.254s 00:20:28.923 user 0m17.303s 00:20:28.923 sys 0m1.448s 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:28.923 ************************************ 00:20:28.923 END TEST nvmf_shutdown_tc3 00:20:28.923 ************************************ 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:28.923 ************************************ 00:20:28.923 START TEST nvmf_shutdown_tc4 00:20:28.923 ************************************ 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:28.923 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:28.924 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:28.924 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:28.924 Found net devices under 0000:09:00.0: cvl_0_0 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:28.924 Found net devices under 0000:09:00.1: cvl_0_1 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:28.924 20:50:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:28.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:20:28.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:20:28.924 00:20:28.924 --- 10.0.0.2 ping statistics --- 00:20:28.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.924 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:28.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:28.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:20:28.924 00:20:28.924 --- 10.0.0.1 ping statistics --- 00:20:28.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.924 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1707910 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:28.924 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1707910 00:20:28.925 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1707910 ']' 00:20:28.925 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.925 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:28.925 20:50:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.925 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:28.925 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:28.925 [2024-11-26 20:50:32.351737] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:20:28.925 [2024-11-26 20:50:32.351854] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.925 [2024-11-26 20:50:32.426627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:28.925 [2024-11-26 20:50:32.486172] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:28.925 [2024-11-26 20:50:32.486222] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:28.925 [2024-11-26 20:50:32.486245] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:28.925 [2024-11-26 20:50:32.486256] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:28.925 [2024-11-26 20:50:32.486265] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:28.925 [2024-11-26 20:50:32.487809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:28.925 [2024-11-26 20:50:32.487915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:28.925 [2024-11-26 20:50:32.488013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:28.925 [2024-11-26 20:50:32.488021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:28.925 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:28.925 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:20:28.925 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:28.925 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:28.925 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:29.183 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:29.183 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:29.183 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.183 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:29.183 [2024-11-26 20:50:32.645894] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:29.183 20:50:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.183 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:29.183 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:29.183 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:29.183 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:29.183 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:29.183 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.183 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:29.183 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.183 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:29.183 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.183 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:29.183 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.183 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:29.183 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.183 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:29.183 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.183 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:29.183 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.183 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:29.183 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.183 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:29.183 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.183 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:29.183 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.183 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:29.183 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:29.183 
20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.183 20:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:29.183 Malloc1 00:20:29.183 [2024-11-26 20:50:32.743083] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:29.183 Malloc2 00:20:29.183 Malloc3 00:20:29.183 Malloc4 00:20:29.441 Malloc5 00:20:29.441 Malloc6 00:20:29.441 Malloc7 00:20:29.441 Malloc8 00:20:29.441 Malloc9 00:20:29.699 Malloc10 00:20:29.699 20:50:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.699 20:50:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:29.699 20:50:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:29.699 20:50:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:29.699 20:50:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1708000 00:20:29.699 20:50:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:20:29.699 20:50:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:20:29.699 [2024-11-26 20:50:33.259639] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
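Note on the subsystem setup traced above: the ten loop iterations only show the cat xtrace, and the rpcs.txt batch they build is consumed by the bare rpc_cmd call at shutdown.sh@36, so its contents are never echoed into this log. As a rough sketch only (the bdev size, serial number, and NQN pattern below are assumptions, not values captured from this run), each iteration plausibly appends a block along these lines, which would be consistent with the Malloc1..Malloc10 bdevs and the 10.0.0.2:4420 listener reported above:

# Hypothetical reconstruction of one loop iteration (i=1); not copied from the real shutdown.sh.
cat >> rpcs.txt << EOF
bdev_malloc_create -b Malloc1 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
EOF
# The bare rpc_cmd at shutdown.sh@36 then appears to apply the whole batch from stdin
# (redirections are not shown by xtrace), e.g.:
# rpc_cmd < rpcs.txt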
00:20:34.967 20:50:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:34.967 20:50:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1707910 00:20:34.967 20:50:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1707910 ']' 00:20:34.967 20:50:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1707910 00:20:34.967 20:50:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:20:34.967 20:50:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:34.967 20:50:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1707910 00:20:34.967 20:50:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:34.967 20:50:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:34.967 20:50:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1707910' 00:20:34.967 killing process with pid 1707910 00:20:34.967 20:50:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1707910 00:20:34.967 20:50:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1707910 00:20:34.967 [2024-11-26 20:50:38.247390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc3370 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.247475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc3370 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.247492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc3370 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.247505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc3370 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.247518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc3370 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.247530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc3370 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.248483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc3d10 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.249503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc2ea0 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.249539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc2ea0 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.249555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc2ea0 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.249569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xfc2ea0 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.251615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc1650 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.251650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc1650 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.251675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc1650 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.251688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc1650 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.251701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc1650 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.251713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc1650 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.253608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1f5d0 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.253643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1f5d0 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.253668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1f5d0 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.253680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1f5d0 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.253692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1f5d0 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.254057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1faa0 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.254093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1faa0 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.254109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1faa0 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.254122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1faa0 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.254134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1faa0 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.254171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1faa0 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.254185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1faa0 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.254198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1faa0 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.254212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1faa0 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.254230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1faa0 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.254857] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1ff70 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.254887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1ff70 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.254911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1ff70 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.254924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1ff70 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.254936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1ff70 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.254955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1ff70 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.254980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1ff70 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.254996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1ff70 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.255008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf1ff70 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.255931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc41e0 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.255963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc41e0 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.255988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc41e0 is same with the state(6) to be set 00:20:34.967 [2024-11-26 20:50:38.256001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc41e0 is same with the state(6) to be set 00:20:34.968 [2024-11-26 20:50:38.256014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc41e0 is same with the state(6) to be set 00:20:34.968 [2024-11-26 20:50:38.256025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc41e0 is same with the state(6) to be set 00:20:34.968 [2024-11-26 20:50:38.256037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc41e0 is same with the state(6) to be set 00:20:34.968 [2024-11-26 20:50:38.256048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc41e0 is same with the state(6) to be set 00:20:34.968 [2024-11-26 20:50:38.256060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc41e0 is same with the state(6) to be set 00:20:34.968 [2024-11-26 20:50:38.257661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf21c50 is same with the state(6) to be set 00:20:34.968 [2024-11-26 20:50:38.257691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf21c50 is same with the state(6) to be set 00:20:34.968 [2024-11-26 20:50:38.257713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf21c50 is same with the state(6) to be set 00:20:34.968 [2024-11-26 20:50:38.257726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf21c50 is same with the state(6) to be set 
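The killprocess xtrace a little further up (autotest_common.sh@954 through @978, just before the qpair errors start) is easier to follow when collapsed back into shell. This is a minimal sketch of the visible logic only; anything outside the traced lines, such as the branch taken when the process name is sudo, is a guess:

# Sketch reconstructed from the xtrace above; not the verbatim autotest_common.sh source.
killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1                             # the '[' -z 1707910 ']' guard at @954
    kill -0 "$pid" || return 1                            # @958: is the target pid still alive?
    if [ "$(uname)" = Linux ]; then                       # @959
        process_name=$(ps --no-headers -o comm= "$pid")   # @960: resolves to reactor_1 in this run
    fi
    if [ "$process_name" != sudo ]; then                  # @964: the sudo path is not taken here
        echo "killing process with pid $pid"              # @972
        kill "$pid"                                       # @973: plain SIGTERM
        wait "$pid"                                       # @978: returns once nvmf_tgt has exited
    fi
}
# In the trace above this is invoked as: killprocess 1707910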
00:20:34.968 [2024-11-26 20:50:38.258119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf22120 is same with the state(6) to be set 00:20:34.968 [2024-11-26 20:50:38.258149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf22120 is same with the state(6) to be set 00:20:34.968 [2024-11-26 20:50:38.258164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf22120 is same with the state(6) to be set 00:20:34.968 [2024-11-26 20:50:38.258522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf225f0 is same with the state(6) to be set 00:20:34.968 [2024-11-26 20:50:38.258554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf225f0 is same with the state(6) to be set 00:20:34.968 [2024-11-26 20:50:38.258578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf225f0 is same with the state(6) to be set 00:20:34.968 [2024-11-26 20:50:38.258612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf225f0 is same with the state(6) to be set 00:20:34.968 [2024-11-26 20:50:38.258630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf225f0 is same with the state(6) to be set 00:20:34.968 [2024-11-26 20:50:38.258643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf225f0 is same with the state(6) to be set 00:20:34.968 [2024-11-26 20:50:38.258655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf225f0 is same with the state(6) to be set 00:20:34.968 [2024-11-26 20:50:38.258667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf225f0 is same with the state(6) to be set 00:20:34.968 [2024-11-26 20:50:38.258680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf225f0 is same with the state(6) to be set 00:20:34.968 [2024-11-26 20:50:38.259870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf21780 is same with the state(6) to be set 00:20:34.968 [2024-11-26 20:50:38.259902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf21780 is same with the state(6) to be set 00:20:34.968 [2024-11-26 20:50:38.259921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf21780 is same with the state(6) to be set 00:20:34.968 [2024-11-26 20:50:38.267688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf22f90 is same with the state(6) to be set 00:20:34.968 [2024-11-26 20:50:38.267743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf22f90 is same with the state(6) to be set 00:20:34.968 [2024-11-26 20:50:38.267775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf22f90 is same with the state(6) to be set 00:20:34.968 [2024-11-26 20:50:38.267795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf22f90 is same with the state(6) to be set 00:20:34.968 [2024-11-26 20:50:38.267819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf22f90 is same with the state(6) to be set 00:20:34.968 [2024-11-26 20:50:38.267836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf22f90 is same with the state(6) to be set 00:20:34.968 [2024-11-26 20:50:38.267850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf22f90 is 
same with the state(6) to be set 00:20:34.968 [2024-11-26 20:50:38.267862] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf22f90 is same with the state(6) to be set 00:20:34.968 [2024-11-26 20:50:38.267874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf22f90 is same with the state(6) to be set 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 starting I/O failed: -6 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 starting I/O failed: -6 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 starting I/O failed: -6 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 starting I/O failed: -6 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 starting I/O failed: -6 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 starting I/O failed: -6 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 starting I/O failed: -6 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 starting I/O failed: -6 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 starting I/O failed: -6 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 [2024-11-26 20:50:38.272369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 starting I/O failed: -6 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 starting I/O failed: -6 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 starting I/O failed: -6 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 starting I/O failed: -6 00:20:34.968 Write 
completed with error (sct=0, sc=8) 00:20:34.968 starting I/O failed: -6 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 starting I/O failed: -6 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 starting I/O failed: -6 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 starting I/O failed: -6 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 starting I/O failed: -6 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 [2024-11-26 20:50:38.272971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66560 is same with the state(6) to be set 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 [2024-11-26 20:50:38.273008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66560 is same with the state(6) to be set 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 starting I/O failed: -6 00:20:34.968 [2024-11-26 20:50:38.273028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66560 is same with the state(6) to be set 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 [2024-11-26 20:50:38.273042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66560 is same with tstarting I/O failed: -6 00:20:34.968 he state(6) to be set 00:20:34.968 [2024-11-26 20:50:38.273055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66560 is same with the state(6) to be set 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 [2024-11-26 20:50:38.273068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66560 is same with the state(6) to be set 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 [2024-11-26 20:50:38.273084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66560 is same with the state(6) to be set 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 [2024-11-26 20:50:38.273097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66560 is same with the state(6) to be set 00:20:34.968 starting I/O failed: -6 00:20:34.968 [2024-11-26 20:50:38.273109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66560 is same with tWrite completed with error (sct=0, sc=8) 00:20:34.968 he state(6) to be set 00:20:34.968 starting I/O failed: -6 00:20:34.968 [2024-11-26 20:50:38.273139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66560 is same with the state(6) to be set 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 starting I/O failed: -6 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 starting I/O failed: -6 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 starting I/O failed: -6 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 starting I/O failed: -6 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 Write completed with error (sct=0, sc=8) 
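Through this stretch the output of the still-running spdk_nvme_perf initiator and of the target's shutdown path interleave on the same console, occasionally mid-word (for example "is same with tstarting I/O failed: -6 ... he state(6) to be set" above is one tcp.c line split by a perf line), which is why the flood reads raggedly. The repeated "Write completed with error (sct=0, sc=8)" lines appear to carry the raw NVMe completion status fields: sct=0 selects the generic status code table, where sc=8 is Command Aborted due to SQ Deletion, and the -6 after "starting I/O failed" matches -ENXIO, the same "No such device or address" reported by nvme_qpair.c. A small, hypothetical helper (not part of the test suite) makes that decoding explicit:

# Hypothetical decoder for the (sct, sc) pairs printed above; the table entries
# follow the NVMe base specification's generic status codes, not any SPDK helper.
decode_nvme_status() {
    local sct=$1 sc=$2
    if [ "$sct" -ne 0 ]; then
        echo "sct=$sct sc=$sc: not a generic status, see the spec's other status tables"
        return
    fi
    case "$sc" in
        0) echo "Successful Completion" ;;
        4) echo "Data Transfer Error" ;;
        7) echo "Command Abort Requested" ;;
        8) echo "Command Aborted due to SQ Deletion" ;;
        *) echo "generic status code $sc" ;;
    esac
}
decode_nvme_status 0 8   # prints: Command Aborted due to SQ Deletion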
00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 starting I/O failed: -6 00:20:34.968 Write completed with error (sct=0, sc=8) 00:20:34.968 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 [2024-11-26 20:50:38.273572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 [2024-11-26 20:50:38.274048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66c80 is same with the state(6) to be set 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 [2024-11-26 20:50:38.274079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66c80 is same with the state(6) to be set 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 [2024-11-26 20:50:38.274096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66c80 is same with the state(6) to be set 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 [2024-11-26 20:50:38.274118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66c80 is same with the state(6) to be set 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 [2024-11-26 20:50:38.274132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66c80 is same with the state(6) to be set 00:20:34.969 starting I/O failed: -6 00:20:34.969 [2024-11-26 20:50:38.274145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd66c80 is same with the state(6) to be set 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 [2024-11-26 20:50:38.274157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0xd66c80 is same with the state(6) to be set 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 [2024-11-26 20:50:38.274789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 [2024-11-26 20:50:38.275026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd67620 is same with tstarting I/O failed: -6 00:20:34.969 he state(6) to be set 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 [2024-11-26 20:50:38.275058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd67620 is same with tstarting I/O failed: -6 00:20:34.969 he state(6) to be set 
00:20:34.969 [2024-11-26 20:50:38.275074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd67620 is same with tWrite completed with error (sct=0, sc=8) 00:20:34.969 he state(6) to be set 00:20:34.969 starting I/O failed: -6 00:20:34.969 [2024-11-26 20:50:38.275090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd67620 is same with the state(6) to be set 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 [2024-11-26 20:50:38.275103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd67620 is same with tstarting I/O failed: -6 00:20:34.969 he state(6) to be set 00:20:34.969 [2024-11-26 20:50:38.275117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd67620 is same with tWrite completed with error (sct=0, sc=8) 00:20:34.969 he state(6) to be set 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O 
failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.969 starting I/O failed: -6 00:20:34.969 Write completed with error (sct=0, sc=8) 00:20:34.970 [2024-11-26 20:50:38.275857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf8f0 is same with the state(6) to be set 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 [2024-11-26 20:50:38.275883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf8f0 is same with the state(6) to be set 00:20:34.970 starting I/O failed: -6 00:20:34.970 [2024-11-26 20:50:38.275902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf8f0 is same with the state(6) to be set 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 [2024-11-26 20:50:38.275915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf8f0 is same with the state(6) to be set 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 [2024-11-26 20:50:38.275927] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf8f0 is same with the state(6) to be set 00:20:34.970 starting I/O failed: -6 00:20:34.970 [2024-11-26 20:50:38.275939] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf8f0 is same with the state(6) to be set 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 [2024-11-26 20:50:38.275951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf8f0 is same with the state(6) to be set 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 [2024-11-26 20:50:38.275964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf8f0 is same with the state(6) to be set 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 [2024-11-26 20:50:38.276505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:34.970 NVMe io qpair process 
completion error 00:20:34.970 [2024-11-26 20:50:38.276577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc02d0 is same with the state(6) to be set 00:20:34.970 [2024-11-26 20:50:38.276614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc02d0 is same with the state(6) to be set 00:20:34.970 [2024-11-26 20:50:38.276628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc02d0 is same with the state(6) to be set 00:20:34.970 [2024-11-26 20:50:38.276641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc02d0 is same with the state(6) to be set 00:20:34.970 [2024-11-26 20:50:38.276693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc02d0 is same with the state(6) to be set 00:20:34.970 [2024-11-26 20:50:38.276712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc02d0 is same with the state(6) to be set 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 [2024-11-26 20:50:38.277212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf420 is same with the state(6) to be set 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 [2024-11-26 20:50:38.277244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf420 is same with the state(6) to be set 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 [2024-11-26 20:50:38.277259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf420 is same with the state(6) to be set 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 [2024-11-26 20:50:38.277272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf420 is same with the state(6) to be set 00:20:34.970 [2024-11-26 20:50:38.277284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf420 is same with the state(6) to be set 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 [2024-11-26 20:50:38.277297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf420 is same with the state(6) to be set 00:20:34.970 [2024-11-26 20:50:38.277334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbf420 is same with the state(6) to be set 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 
00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 [2024-11-26 20:50:38.277721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:34.970 starting I/O failed: -6 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 
00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 [2024-11-26 20:50:38.278823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.970 starting I/O failed: -6 00:20:34.970 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O 
failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 [2024-11-26 20:50:38.279955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 
00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 [2024-11-26 20:50:38.281982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:34.971 NVMe io qpair process completion error 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 starting I/O failed: -6 00:20:34.971 Write completed with error (sct=0, sc=8) 00:20:34.971 Write completed with error 
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:20:34.973 [2024-11-26 20:50:38.288417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:20:34.974 [2024-11-26 20:50:38.289684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:20:34.974 [2024-11-26 20:50:38.291862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:34.974 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:20:34.975 [2024-11-26 20:50:38.293185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:20:34.975 [2024-11-26 20:50:38.294269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:20:34.975 [2024-11-26 20:50:38.295446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:20:34.976 [2024-11-26 20:50:38.299345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:34.976 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:20:34.977 Write completed with error (sct=0, sc=8) 00:20:34.977 starting I/O failed: -6 00:20:34.977 Write completed with error (sct=0, sc=8) 00:20:34.977 Write completed with error (sct=0, sc=8) 00:20:34.977 Write completed with error (sct=0, sc=8) 00:20:34.977 Write completed with error (sct=0, sc=8) 00:20:34.977 starting I/O failed: -6 00:20:34.977 Write completed with error (sct=0, sc=8) 00:20:34.977 Write completed with error (sct=0, sc=8) 00:20:34.977 Write completed with error (sct=0, sc=8) 00:20:34.977 Write completed with error (sct=0, sc=8) 00:20:34.977 starting I/O failed: -6 00:20:34.977 Write completed with error (sct=0, sc=8) 00:20:34.977 Write completed with error (sct=0, sc=8) 00:20:34.977 Write completed with error (sct=0, sc=8) 00:20:34.977 Write completed with error (sct=0, sc=8) 00:20:34.977 starting I/O failed: -6 00:20:34.977 Write completed with error (sct=0, sc=8) 00:20:34.977 Write completed with error (sct=0, sc=8) 00:20:34.977 Write completed with error (sct=0, sc=8) 00:20:34.977 Write completed with error (sct=0, sc=8) 00:20:34.977 starting I/O failed: -6 00:20:34.977 Write completed with error (sct=0, sc=8) 00:20:34.977 Write completed with error (sct=0, sc=8) 00:20:34.977 Write completed with error (sct=0, sc=8) 00:20:34.977 Write completed with error (sct=0, sc=8) 00:20:34.977 starting I/O failed: -6 00:20:34.977 Write completed with error (sct=0, sc=8) 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 [2024-11-26 20:50:38.306461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:34.978 starting I/O failed: -6 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 Write 
completed with error (sct=0, sc=8) 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 [2024-11-26 20:50:38.307615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 
starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 [2024-11-26 20:50:38.308760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 
00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.978 starting I/O failed: -6 00:20:34.978 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 
00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 [2024-11-26 20:50:38.311156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:34.979 NVMe io qpair process completion error 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 [2024-11-26 20:50:38.312419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:34.979 starting I/O failed: -6 00:20:34.979 starting I/O failed: -6 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with 
error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 [2024-11-26 20:50:38.313579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, 
sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.979 starting I/O failed: -6 00:20:34.979 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 [2024-11-26 20:50:38.314716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 
00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 
00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 [2024-11-26 20:50:38.316520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:34.980 NVMe io qpair process completion error 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 Write completed with error (sct=0, sc=8) 
00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 [2024-11-26 20:50:38.317945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 Write completed with error (sct=0, sc=8) 00:20:34.980 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 [2024-11-26 20:50:38.318905] 
nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 Write completed with 
error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 [2024-11-26 20:50:38.320100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 00:20:34.981 Write completed with error (sct=0, sc=8) 00:20:34.981 starting I/O failed: -6 
00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 [2024-11-26 20:50:38.322774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:34.982 NVMe io qpair process completion error 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write 
completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 [2024-11-26 20:50:38.324539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 
00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 [2024-11-26 20:50:38.325620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O 
failed: -6 00:20:34.982 Write completed with error (sct=0, sc=8) 00:20:34.982 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O 
failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O 
failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 Write completed with error (sct=0, sc=8) 00:20:34.983 starting I/O failed: -6 00:20:34.983 [2024-11-26 20:50:38.329843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:34.983 NVMe io qpair process completion error 00:20:34.983 Initializing NVMe Controllers 00:20:34.983 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:20:34.983 Controller IO queue size 128, less than required. 00:20:34.983 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:34.983 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:20:34.983 Controller IO queue size 128, less than required. 00:20:34.983 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:34.983 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:20:34.983 Controller IO queue size 128, less than required. 00:20:34.983 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:34.983 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:20:34.983 Controller IO queue size 128, less than required. 00:20:34.983 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:34.983 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:20:34.983 Controller IO queue size 128, less than required. 00:20:34.983 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:34.983 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:34.983 Controller IO queue size 128, less than required. 00:20:34.983 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:34.983 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:20:34.983 Controller IO queue size 128, less than required. 00:20:34.983 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:34.983 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:20:34.983 Controller IO queue size 128, less than required. 00:20:34.983 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:34.983 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:20:34.983 Controller IO queue size 128, less than required. 00:20:34.983 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:34.983 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:20:34.983 Controller IO queue size 128, less than required. 00:20:34.983 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
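The repeated "Controller IO queue size 128, less than required" notices above are the perf tool warning that the controller's advertised queue size (128) is smaller than what the requested workload needs, so excess requests are queued at the NVMe driver. A minimal sketch of a rerun that follows that advice with the same binary (flag meanings assumed from standard spdk_nvme_perf usage; the depth, size, workload, and duration values here are illustrative, not taken from this run):
# lower the per-queue depth to 64 and the I/O size to 4 KiB against one of the subsystems above
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -q 64 -o 4096 -w write -t 10 \
    -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode7'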
00:20:34.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:20:34.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:20:34.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:20:34.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:20:34.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:20:34.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:34.984 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:20:34.984 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:20:34.984 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:20:34.984 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:20:34.984 Initialization complete. Launching workers.
00:20:34.984 ========================================================
00:20:34.984 Latency(us)
00:20:34.984 Device Information : IOPS MiB/s Average min max
00:20:34.984 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1813.57 77.93 70600.63 761.78 124767.37
00:20:34.984 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1808.35 77.70 70828.08 903.96 154695.41
00:20:34.984 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1777.03 76.36 72121.84 805.42 121328.34
00:20:34.984 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1738.10 74.68 72840.55 1209.44 119887.13
00:20:34.984 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1766.59 75.91 71688.45 848.97 119046.41
00:20:34.984 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1752.89 75.32 72258.18 673.40 125510.84
00:20:34.984 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1789.00 76.87 70819.15 1270.67 124826.75
00:20:34.984 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1801.61 77.41 70356.59 931.52 127064.52
00:20:34.984 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1826.41 78.48 69438.52 1171.31 119209.24
00:20:34.984 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1799.22 77.31 70535.43 808.15 133911.85
00:20:34.984 ========================================================
00:20:34.984 Total : 17872.77 767.97 71134.76 673.40 154695.41
00:20:34.984
00:20:34.984 [2024-11-26 20:50:38.333767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21405f0 is same with the state(6) to be set
00:20:34.984 [2024-11-26 20:50:38.333864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2140920 is same with the state(6) to be set
00:20:34.984 [2024-11-26 20:50:38.333923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2140c50 is same with the state(6) to be set
00:20:34.984 [2024-11-26 20:50:38.333982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2141900 is same with the state(6) to be set
00:20:34.984 [2024-11-26 20:50:38.334040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213f9e0 is same with the state(6) to be set
00:20:34.984 [2024-11-26 20:50:38.334100] nvme_tcp.c:
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2141720 is same with the state(6) to be set 00:20:34.984 [2024-11-26 20:50:38.334161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2141ae0 is same with the state(6) to be set 00:20:34.984 [2024-11-26 20:50:38.334232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213fd10 is same with the state(6) to be set 00:20:34.984 [2024-11-26 20:50:38.334297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x213f6b0 is same with the state(6) to be set 00:20:34.984 [2024-11-26 20:50:38.334367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21402c0 is same with the state(6) to be set 00:20:34.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:20:35.245 20:50:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1708000 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1708000 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1708000 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:36.181 
20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:36.181 rmmod nvme_tcp 00:20:36.181 rmmod nvme_fabrics 00:20:36.181 rmmod nvme_keyring 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1707910 ']' 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1707910 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1707910 ']' 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1707910 00:20:36.181 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1707910) - No such process 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1707910 is not found' 00:20:36.181 Process with pid 1707910 is not found 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:36.181 20:50:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.712 20:50:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:38.712 00:20:38.712 real 0m9.755s 00:20:38.712 user 0m24.380s 00:20:38.712 sys 0m5.303s 00:20:38.712 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:38.712 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:38.712 ************************************ 00:20:38.712 END TEST nvmf_shutdown_tc4 00:20:38.712 ************************************ 00:20:38.712 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:20:38.712 00:20:38.712 real 0m37.247s 00:20:38.712 user 1m40.699s 00:20:38.712 sys 0m11.869s 00:20:38.712 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:38.712 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:38.712 ************************************ 00:20:38.712 END TEST nvmf_shutdown 00:20:38.712 ************************************ 00:20:38.712 20:50:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:20:38.712 20:50:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:38.712 20:50:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:38.712 20:50:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:38.712 ************************************ 00:20:38.712 START TEST nvmf_nsid 00:20:38.712 ************************************ 00:20:38.712 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:20:38.712 * Looking for test storage... 
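Between END TEST nvmf_shutdown and the start of the nsid suite, nvmftestfini (traced just above) tears the transport state back down. Condensed from the trace into plain commands (the body of _remove_spdk_ns runs with xtrace disabled, so the namespace deletion itself is not shown here and is omitted), the cleanup amounts to roughly:
# unload the initiator-side kernel modules pulled in for NVMe/TCP
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# drop the SPDK_NVMF comment-tagged iptables rules inserted by nvmf_tcp_init
iptables-save | grep -v SPDK_NVMF | iptables-restore
# flush the address left on the initiator-side port
ip -4 addr flush cvl_0_1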
00:20:38.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:38.712 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:38.712 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:20:38.712 20:50:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:38.712 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:38.712 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:38.712 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:38.712 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:38.712 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:20:38.712 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:20:38.712 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:20:38.712 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:20:38.712 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:20:38.712 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:20:38.712 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:20:38.712 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:38.712 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:20:38.712 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:20:38.712 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:38.712 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:38.712 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:20:38.712 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:20:38.712 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:38.712 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:20:38.712 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:38.712 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:20:38.712 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:20:38.712 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:38.712 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:20:38.712 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:38.712 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:38.712 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:38.712 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:20:38.712 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:38.712 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:38.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.712 --rc genhtml_branch_coverage=1 00:20:38.712 --rc genhtml_function_coverage=1 00:20:38.712 --rc genhtml_legend=1 00:20:38.712 --rc geninfo_all_blocks=1 00:20:38.712 --rc geninfo_unexecuted_blocks=1 00:20:38.712 00:20:38.712 ' 00:20:38.712 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:38.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.712 --rc genhtml_branch_coverage=1 00:20:38.712 --rc genhtml_function_coverage=1 00:20:38.712 --rc genhtml_legend=1 00:20:38.712 --rc geninfo_all_blocks=1 00:20:38.712 --rc geninfo_unexecuted_blocks=1 00:20:38.712 00:20:38.712 ' 00:20:38.712 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:38.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.712 --rc genhtml_branch_coverage=1 00:20:38.712 --rc genhtml_function_coverage=1 00:20:38.712 --rc genhtml_legend=1 00:20:38.712 --rc geninfo_all_blocks=1 00:20:38.712 --rc geninfo_unexecuted_blocks=1 00:20:38.712 00:20:38.712 ' 00:20:38.712 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:38.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.712 --rc genhtml_branch_coverage=1 00:20:38.712 --rc genhtml_function_coverage=1 00:20:38.712 --rc genhtml_legend=1 00:20:38.712 --rc geninfo_all_blocks=1 00:20:38.712 --rc geninfo_unexecuted_blocks=1 00:20:38.712 00:20:38.712 ' 00:20:38.712 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:38.712 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:38.713 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:20:38.713 20:50:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:40.640 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:40.640 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
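With the two e810 PCI functions found (0000:09:00.0 and 0000:09:00.1, exposing cvl_0_0 and cvl_0_1 in the lines below), nvmf_tcp_init moves the target-side port into its own network namespace and addresses both ends before any listener comes up. The sequence traced in the next lines condenses to:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # tagged with an SPDK_NVMF comment in the trace
ping -c 1 10.0.0.2                                       # connectivity check from the initiator side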
00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:40.640 Found net devices under 0000:09:00.0: cvl_0_0 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:40.640 Found net devices under 0000:09:00.1: cvl_0_1 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.640 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:40.641 20:50:44 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:40.641 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:40.641 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:20:40.641 00:20:40.641 --- 10.0.0.2 ping statistics --- 00:20:40.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.641 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:40.641 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:40.641 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:20:40.641 00:20:40.641 --- 10.0.0.1 ping statistics --- 00:20:40.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.641 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1710734 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1710734 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1710734 ']' 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:40.641 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:40.641 [2024-11-26 20:50:44.244051] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:20:40.641 [2024-11-26 20:50:44.244161] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.641 [2024-11-26 20:50:44.317186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.899 [2024-11-26 20:50:44.372659] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.899 [2024-11-26 20:50:44.372714] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.899 [2024-11-26 20:50:44.372737] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:40.899 [2024-11-26 20:50:44.372747] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:40.899 [2024-11-26 20:50:44.372756] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:40.899 [2024-11-26 20:50:44.373341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.899 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:40.899 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:20:40.899 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:40.899 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:40.899 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:40.899 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.899 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:40.899 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1710761 00:20:40.899 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:20:40.899 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:20:40.899 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:20:40.899 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:20:40.899 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:40.899 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:40.899 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:40.899 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:40.899 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:40.899 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:40.899 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:40.899 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:40.899 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:20:40.899 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:20:40.899 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:20:40.900 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=b1f64f9b-9fc2-466d-b5d5-1e674bdde944 00:20:40.900 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:20:40.900 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=02f58301-97cd-4c2f-b25b-450fb1941e53 00:20:40.900 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:20:40.900 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=6bf979f3-1282-473b-85b9-9d6ffb8ae1f3 00:20:40.900 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:20:40.900 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.900 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:40.900 null0 00:20:40.900 null1 00:20:40.900 null2 00:20:40.900 [2024-11-26 20:50:44.554824] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:40.900 [2024-11-26 20:50:44.568300] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:20:40.900 [2024-11-26 20:50:44.568400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1710761 ] 00:20:40.900 [2024-11-26 20:50:44.579026] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:41.158 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.158 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1710761 /var/tmp/tgt2.sock 00:20:41.158 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1710761 ']' 00:20:41.158 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:20:41.158 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:41.158 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:20:41.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
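Three null bdevs (null0, null1, null2) and three UUIDs have been set up above, and the second target now listens on 10.0.0.1:4421. The verification traced below connects to nqn.2024-10.io.spdk:cnode2 and, for each of the controller's three namespaces, checks that the NGUID it reports equals the corresponding UUID with the dashes stripped (compared case-insensitively). Condensed, one such check is:
nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
    --hostid=29f67375-a902-e411-ace9-001e67bc3c9a
# NGUID as reported by the controller for namespace 1
nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid            # b1f64f9b9fc2466db5d51e674bdde944
# expected value: the generated UUID with its dashes removed
echo b1f64f9b-9fc2-466d-b5d5-1e674bdde944 | tr -d -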
00:20:41.158 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:41.158 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:41.158 [2024-11-26 20:50:44.635328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.158 [2024-11-26 20:50:44.692615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.416 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:41.416 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:20:41.416 20:50:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:20:41.673 [2024-11-26 20:50:45.363134] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:41.931 [2024-11-26 20:50:45.379400] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:20:41.931 nvme0n1 nvme0n2 00:20:41.931 nvme1n1 00:20:41.931 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:20:41.931 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:20:41.931 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:42.497 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:20:42.497 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:20:42.497 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:20:42.497 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:20:42.497 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:20:42.497 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:20:42.497 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:20:42.497 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:42.497 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:42.497 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:42.497 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:20:42.497 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:20:42.497 20:50:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:20:43.431 20:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:43.431 20:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:43.431 20:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:43.431 20:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:43.431 20:50:47 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:43.431 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid b1f64f9b-9fc2-466d-b5d5-1e674bdde944 00:20:43.431 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:43.431 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:20:43.431 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:20:43.431 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:20:43.431 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:43.431 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=b1f64f9b9fc2466db5d51e674bdde944 00:20:43.431 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo B1F64F9B9FC2466DB5D51E674BDDE944 00:20:43.431 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ B1F64F9B9FC2466DB5D51E674BDDE944 == \B\1\F\6\4\F\9\B\9\F\C\2\4\6\6\D\B\5\D\5\1\E\6\7\4\B\D\D\E\9\4\4 ]] 00:20:43.431 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:20:43.431 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:43.431 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:43.431 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:20:43.431 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:43.431 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:20:43.431 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:43.431 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 02f58301-97cd-4c2f-b25b-450fb1941e53 00:20:43.431 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:43.431 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:20:43.431 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:20:43.431 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:20:43.431 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:43.431 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=02f5830197cd4c2fb25b450fb1941e53 00:20:43.431 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 02F5830197CD4C2FB25B450FB1941E53 00:20:43.431 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 02F5830197CD4C2FB25B450FB1941E53 == \0\2\F\5\8\3\0\1\9\7\C\D\4\C\2\F\B\2\5\B\4\5\0\F\B\1\9\4\1\E\5\3 ]] 00:20:43.431 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:20:43.431 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:43.431 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:43.431 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:20:43.431 20:50:47 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:43.431 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:20:43.431 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:43.689 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 6bf979f3-1282-473b-85b9-9d6ffb8ae1f3 00:20:43.689 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:43.689 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:20:43.689 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:20:43.689 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:20:43.689 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:43.689 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=6bf979f31282473b85b99d6ffb8ae1f3 00:20:43.689 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 6BF979F31282473B85B99D6FFB8AE1F3 00:20:43.689 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 6BF979F31282473B85B99D6FFB8AE1F3 == \6\B\F\9\7\9\F\3\1\2\8\2\4\7\3\B\8\5\B\9\9\D\6\F\F\B\8\A\E\1\F\3 ]] 00:20:43.689 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:20:43.689 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:20:43.689 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:20:43.689 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1710761 00:20:43.689 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1710761 ']' 00:20:43.689 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1710761 00:20:43.689 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:20:43.689 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:43.689 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1710761 00:20:43.689 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:43.946 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:43.946 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1710761' 00:20:43.946 killing process with pid 1710761 00:20:43.946 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1710761 00:20:43.946 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1710761 00:20:44.203 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:20:44.203 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:44.203 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:20:44.203 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:44.203 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:20:44.203 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:44.203 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:44.203 rmmod nvme_tcp 00:20:44.203 rmmod nvme_fabrics 00:20:44.203 rmmod nvme_keyring 00:20:44.203 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:44.203 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:20:44.203 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:20:44.203 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1710734 ']' 00:20:44.203 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1710734 00:20:44.203 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1710734 ']' 00:20:44.203 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1710734 00:20:44.203 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:20:44.203 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:44.203 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1710734 00:20:44.461 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:44.461 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:44.461 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1710734' 00:20:44.461 killing process with pid 1710734 00:20:44.461 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1710734 00:20:44.461 20:50:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1710734 00:20:44.719 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:44.719 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:44.719 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:44.719 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:20:44.719 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:20:44.719 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:20:44.719 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:44.719 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:44.719 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:44.719 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.719 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:44.719 20:50:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.626 20:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:46.626 00:20:46.626 real 0m8.271s 00:20:46.626 user 0m8.344s 
00:20:46.626 sys 0m2.568s 00:20:46.626 20:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:46.626 20:50:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:46.626 ************************************ 00:20:46.626 END TEST nvmf_nsid 00:20:46.626 ************************************ 00:20:46.626 20:50:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:20:46.626 00:20:46.626 real 11m43.761s 00:20:46.626 user 27m47.171s 00:20:46.626 sys 2m49.437s 00:20:46.626 20:50:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:46.626 20:50:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:46.626 ************************************ 00:20:46.626 END TEST nvmf_target_extra 00:20:46.626 ************************************ 00:20:46.626 20:50:50 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:46.626 20:50:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:46.626 20:50:50 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:46.626 20:50:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:46.626 ************************************ 00:20:46.626 START TEST nvmf_host 00:20:46.626 ************************************ 00:20:46.626 20:50:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:46.885 * Looking for test storage... 00:20:46.885 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:46.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.885 --rc genhtml_branch_coverage=1 00:20:46.885 --rc genhtml_function_coverage=1 00:20:46.885 --rc genhtml_legend=1 00:20:46.885 --rc geninfo_all_blocks=1 00:20:46.885 --rc geninfo_unexecuted_blocks=1 00:20:46.885 00:20:46.885 ' 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:46.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.885 --rc genhtml_branch_coverage=1 00:20:46.885 --rc genhtml_function_coverage=1 00:20:46.885 --rc genhtml_legend=1 00:20:46.885 --rc geninfo_all_blocks=1 00:20:46.885 --rc geninfo_unexecuted_blocks=1 00:20:46.885 00:20:46.885 ' 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:46.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.885 --rc genhtml_branch_coverage=1 00:20:46.885 --rc genhtml_function_coverage=1 00:20:46.885 --rc genhtml_legend=1 00:20:46.885 --rc geninfo_all_blocks=1 00:20:46.885 --rc geninfo_unexecuted_blocks=1 00:20:46.885 00:20:46.885 ' 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:46.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:46.885 --rc genhtml_branch_coverage=1 00:20:46.885 --rc genhtml_function_coverage=1 00:20:46.885 --rc genhtml_legend=1 00:20:46.885 --rc geninfo_all_blocks=1 00:20:46.885 --rc geninfo_unexecuted_blocks=1 00:20:46.885 00:20:46.885 ' 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
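The xtrace above steps through the cmp_versions/lt helpers from scripts/common.sh while the harness checks the installed lcov version (lt 1.15 2). A minimal standalone sketch of the same dotted-version comparison follows; it is a simplified stand-in, not SPDK's own implementation.

#!/usr/bin/env bash
# Sketch: succeed when dotted version $1 is strictly older than $2
# (simplified stand-in for the cmp_versions trace above, not SPDK's code).
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < max; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0      # first differing field decides
        (( x > y )) && return 1
    done
    return 1                         # equal versions are not "less than"
}

version_lt 1.15 2 && echo '1.15 < 2'   # mirrors the lt 1.15 2 check in the log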
00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:46.885 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.885 ************************************ 00:20:46.885 START TEST nvmf_multicontroller 00:20:46.885 ************************************ 00:20:46.885 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:46.885 * Looking for test storage... 
00:20:46.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:46.886 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:46.886 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:46.886 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:47.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.144 --rc genhtml_branch_coverage=1 00:20:47.144 --rc genhtml_function_coverage=1 00:20:47.144 --rc genhtml_legend=1 00:20:47.144 --rc geninfo_all_blocks=1 00:20:47.144 --rc geninfo_unexecuted_blocks=1 00:20:47.144 00:20:47.144 ' 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:47.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.144 --rc genhtml_branch_coverage=1 00:20:47.144 --rc genhtml_function_coverage=1 00:20:47.144 --rc genhtml_legend=1 00:20:47.144 --rc geninfo_all_blocks=1 00:20:47.144 --rc geninfo_unexecuted_blocks=1 00:20:47.144 00:20:47.144 ' 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:47.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.144 --rc genhtml_branch_coverage=1 00:20:47.144 --rc genhtml_function_coverage=1 00:20:47.144 --rc genhtml_legend=1 00:20:47.144 --rc geninfo_all_blocks=1 00:20:47.144 --rc geninfo_unexecuted_blocks=1 00:20:47.144 00:20:47.144 ' 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:47.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.144 --rc genhtml_branch_coverage=1 00:20:47.144 --rc genhtml_function_coverage=1 00:20:47.144 --rc genhtml_legend=1 00:20:47.144 --rc geninfo_all_blocks=1 00:20:47.144 --rc geninfo_unexecuted_blocks=1 00:20:47.144 00:20:47.144 ' 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:47.144 20:50:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:47.144 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:47.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:47.145 20:50:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:20:47.145 20:50:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:49.046 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:49.046 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:20:49.046 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:49.046 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:49.046 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:49.046 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:49.046 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:49.046 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:20:49.046 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:49.046 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:20:49.046 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:20:49.046 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:20:49.046 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:20:49.046 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:20:49.046 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:20:49.046 
20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:49.046 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:49.046 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:49.046 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:49.046 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:49.046 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:49.046 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:49.046 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:49.046 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:49.046 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:49.046 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:49.046 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:49.046 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:49.046 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:49.046 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:49.046 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:49.046 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:49.046 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:49.046 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:49.046 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:49.046 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:49.046 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:49.047 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:49.047 20:50:52 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:49.047 Found net devices under 0000:09:00.0: cvl_0_0 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:49.047 Found net devices under 0000:09:00.1: cvl_0_1 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
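At this point the harness has discovered the two E810 ports (cvl_0_0/cvl_0_1) and calls nvmf_tcp_init, which moves the target port into its own network namespace, assigns 10.0.0.2/10.0.0.1, opens TCP port 4420 and ping-tests both directions, as the trace below shows. A rough, self-contained sketch of that topology using a veth pair instead of the physical NICs; the interface names tgt0/ini0 and the namespace name are illustrative, not taken from the harness.

#!/usr/bin/env bash
# Build an initiator<->target NVMe/TCP test topology similar to nvmf_tcp_init,
# but on a veth pair (sketch; run as root, names are assumptions).
set -euo pipefail

ip netns add nvmf_tgt_ns
ip link add ini0 type veth peer name tgt0
ip link set tgt0 netns nvmf_tgt_ns                     # target side lives in the namespace

ip addr add 10.0.0.1/24 dev ini0                       # initiator address
ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev tgt0
ip link set ini0 up
ip netns exec nvmf_tgt_ns ip link set tgt0 up
ip netns exec nvmf_tgt_ns ip link set lo up

# let NVMe/TCP traffic reach the default port, mirroring the ipts helper in the log
iptables -I INPUT 1 -i ini0 -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                                     # target reachable from the host
ip netns exec nvmf_tgt_ns ping -c 1 10.0.0.1           # and the host from the target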
00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:49.047 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:49.305 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:49.305 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:49.305 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:49.305 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:49.305 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:49.305 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:49.305 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:49.305 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:49.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:49.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:20:49.305 00:20:49.305 --- 10.0.0.2 ping statistics --- 00:20:49.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.305 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:20:49.305 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:49.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:49.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:20:49.305 00:20:49.305 --- 10.0.0.1 ping statistics --- 00:20:49.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:49.305 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:20:49.305 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:49.305 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:20:49.305 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:49.305 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:49.305 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:49.305 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:49.305 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:49.305 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:49.305 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:49.305 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:49.305 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:49.305 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:49.305 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:49.305 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1713317 00:20:49.306 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:49.306 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1713317 00:20:49.306 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1713317 ']' 00:20:49.306 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.306 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:49.306 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.306 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:49.306 20:50:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:49.306 [2024-11-26 20:50:52.949997] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
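The trace above launches nvmf_tgt inside the target namespace with core mask 0xE and then blocks in waitforlisten until the application's JSON-RPC socket answers. A condensed sketch of that start-and-wait step; the SPDK checkout path and namespace name are placeholders, and the retry loop approximates waitforlisten rather than reproducing it.

#!/usr/bin/env bash
# Start nvmf_tgt on cores 1-3 in the target namespace and wait for its RPC socket (sketch).
SPDK_DIR=/path/to/spdk                 # placeholder for the checkout used by the job
RPC_SOCK=/var/tmp/spdk.sock

ip netns exec nvmf_tgt_ns "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

for ((i = 0; i < 100; i++)); do        # roughly 10 s of 0.1 s probes
    if "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods &>/dev/null; then
        echo "nvmf_tgt (pid $nvmfpid) is listening on $RPC_SOCK"
        break
    fi
    sleep 0.1
done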
00:20:49.306 [2024-11-26 20:50:52.950105] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.564 [2024-11-26 20:50:53.021292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:49.564 [2024-11-26 20:50:53.078381] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:49.564 [2024-11-26 20:50:53.078441] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:49.564 [2024-11-26 20:50:53.078470] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:49.564 [2024-11-26 20:50:53.078482] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:49.564 [2024-11-26 20:50:53.078491] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:49.564 [2024-11-26 20:50:53.079991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:49.564 [2024-11-26 20:50:53.080055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:49.564 [2024-11-26 20:50:53.080059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.564 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:49.564 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:20:49.564 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:49.564 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:49.564 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:49.564 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:49.564 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:49.564 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.564 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:49.564 [2024-11-26 20:50:53.227529] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:49.564 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.564 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:49.564 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.564 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:49.821 Malloc0 00:20:49.821 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:49.822 [2024-11-26 20:50:53.295999] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:49.822 [2024-11-26 20:50:53.303884] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:49.822 Malloc1 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1713346 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1713346 /var/tmp/bdevperf.sock 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1713346 ']' 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:49.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
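The rpc_cmd calls above provision the target side of this test: a TCP transport, two malloc bdevs, subsystems cnode1/cnode2 with one namespace each, and listeners on 10.0.0.2 ports 4420 and 4421, before bdevperf is started against its own RPC socket. As a minimal sketch only (not part of the recorded run; paths, flags, and addresses taken from the log, with the stock scripts/rpc.py client assumed in place of the test's rpc_cmd wrapper), the same sequence is:
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # transport, backing bdev, subsystem, namespace, and listeners for cnode1
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # cnode2 is built the same way from Malloc1, then bdevperf is launched with
  # its own RPC socket so controllers can be attached to it in the next steps
  $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &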
00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:49.822 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:50.080 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:50.080 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:20:50.080 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:20:50.080 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.080 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:50.337 NVMe0n1 00:20:50.337 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.337 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:50.337 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:50.337 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.337 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:50.337 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.337 1 00:20:50.337 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:20:50.337 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:20:50.337 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:20:50.337 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:50.337 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:50.337 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:50.337 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:50.337 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:20:50.337 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.337 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:50.337 request: 00:20:50.337 { 00:20:50.337 "name": "NVMe0", 00:20:50.337 "trtype": "tcp", 00:20:50.337 "traddr": "10.0.0.2", 00:20:50.337 "adrfam": "ipv4", 00:20:50.337 "trsvcid": "4420", 00:20:50.337 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:20:50.337 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:50.337 "hostaddr": "10.0.0.1", 00:20:50.337 "prchk_reftag": false, 00:20:50.337 "prchk_guard": false, 00:20:50.337 "hdgst": false, 00:20:50.337 "ddgst": false, 00:20:50.337 "allow_unrecognized_csi": false, 00:20:50.337 "method": "bdev_nvme_attach_controller", 00:20:50.337 "req_id": 1 00:20:50.337 } 00:20:50.337 Got JSON-RPC error response 00:20:50.337 response: 00:20:50.337 { 00:20:50.337 "code": -114, 00:20:50.337 "message": "A controller named NVMe0 already exists with the specified network path" 00:20:50.337 } 00:20:50.337 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:50.337 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:50.338 request: 00:20:50.338 { 00:20:50.338 "name": "NVMe0", 00:20:50.338 "trtype": "tcp", 00:20:50.338 "traddr": "10.0.0.2", 00:20:50.338 "adrfam": "ipv4", 00:20:50.338 "trsvcid": "4420", 00:20:50.338 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:50.338 "hostaddr": "10.0.0.1", 00:20:50.338 "prchk_reftag": false, 00:20:50.338 "prchk_guard": false, 00:20:50.338 "hdgst": false, 00:20:50.338 "ddgst": false, 00:20:50.338 "allow_unrecognized_csi": false, 00:20:50.338 "method": "bdev_nvme_attach_controller", 00:20:50.338 "req_id": 1 00:20:50.338 } 00:20:50.338 Got JSON-RPC error response 00:20:50.338 response: 00:20:50.338 { 00:20:50.338 "code": -114, 00:20:50.338 "message": "A controller named NVMe0 already exists with the specified network path" 00:20:50.338 } 00:20:50.338 20:50:53 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:50.338 request: 00:20:50.338 { 00:20:50.338 "name": "NVMe0", 00:20:50.338 "trtype": "tcp", 00:20:50.338 "traddr": "10.0.0.2", 00:20:50.338 "adrfam": "ipv4", 00:20:50.338 "trsvcid": "4420", 00:20:50.338 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:50.338 "hostaddr": "10.0.0.1", 00:20:50.338 "prchk_reftag": false, 00:20:50.338 "prchk_guard": false, 00:20:50.338 "hdgst": false, 00:20:50.338 "ddgst": false, 00:20:50.338 "multipath": "disable", 00:20:50.338 "allow_unrecognized_csi": false, 00:20:50.338 "method": "bdev_nvme_attach_controller", 00:20:50.338 "req_id": 1 00:20:50.338 } 00:20:50.338 Got JSON-RPC error response 00:20:50.338 response: 00:20:50.338 { 00:20:50.338 "code": -114, 00:20:50.338 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:20:50.338 } 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:50.338 20:50:53 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:50.338 request: 00:20:50.338 { 00:20:50.338 "name": "NVMe0", 00:20:50.338 "trtype": "tcp", 00:20:50.338 "traddr": "10.0.0.2", 00:20:50.338 "adrfam": "ipv4", 00:20:50.338 "trsvcid": "4420", 00:20:50.338 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:50.338 "hostaddr": "10.0.0.1", 00:20:50.338 "prchk_reftag": false, 00:20:50.338 "prchk_guard": false, 00:20:50.338 "hdgst": false, 00:20:50.338 "ddgst": false, 00:20:50.338 "multipath": "failover", 00:20:50.338 "allow_unrecognized_csi": false, 00:20:50.338 "method": "bdev_nvme_attach_controller", 00:20:50.338 "req_id": 1 00:20:50.338 } 00:20:50.338 Got JSON-RPC error response 00:20:50.338 response: 00:20:50.338 { 00:20:50.338 "code": -114, 00:20:50.338 "message": "A controller named NVMe0 already exists with the specified network path" 00:20:50.338 } 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.338 20:50:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:50.338 NVMe0n1 00:20:50.338 20:50:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
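The -114 responses above come from attach attempts that conflict with the already-attached NVMe0 controller (re-using the 10.0.0.2:4420 path, pointing the same name at cnode2, or re-attaching with multipath disabled or in failover mode on the same path), while the final call succeeds, presumably because 10.0.0.2:4421 is a path NVMe0 does not have yet. A minimal sketch of that last step (assumptions: stock scripts/rpc.py client instead of the test's rpc_cmd wrapper, SPDK checkout path as logged):
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # add the 4421 listener as a second path for the existing NVMe0 controller,
  # talking to bdevperf's private RPC socket
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # list controllers to confirm NVMe0 is still the only one reported
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers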
00:20:50.338 20:50:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:50.338 20:50:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.338 20:50:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:50.596 20:50:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.596 20:50:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:20:50.596 20:50:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.596 20:50:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:50.596 00:20:50.596 20:50:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.596 20:50:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:50.596 20:50:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:50.596 20:50:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.596 20:50:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:50.596 20:50:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.596 20:50:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:50.596 20:50:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:51.969 { 00:20:51.969 "results": [ 00:20:51.969 { 00:20:51.969 "job": "NVMe0n1", 00:20:51.969 "core_mask": "0x1", 00:20:51.969 "workload": "write", 00:20:51.969 "status": "finished", 00:20:51.969 "queue_depth": 128, 00:20:51.969 "io_size": 4096, 00:20:51.969 "runtime": 1.003918, 00:20:51.969 "iops": 17459.593313398105, 00:20:51.969 "mibps": 68.20153638046135, 00:20:51.969 "io_failed": 0, 00:20:51.969 "io_timeout": 0, 00:20:51.969 "avg_latency_us": 7319.029311155061, 00:20:51.969 "min_latency_us": 4903.063703703704, 00:20:51.969 "max_latency_us": 13204.29037037037 00:20:51.969 } 00:20:51.969 ], 00:20:51.969 "core_count": 1 00:20:51.969 } 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1713346 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 1713346 ']' 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1713346 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1713346 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1713346' 00:20:51.969 killing process with pid 1713346 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1713346 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1713346 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:20:51.969 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:51.969 [2024-11-26 20:50:53.414573] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:20:51.969 [2024-11-26 20:50:53.414689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1713346 ] 00:20:51.969 [2024-11-26 20:50:53.483812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.969 [2024-11-26 20:50:53.543994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.969 [2024-11-26 20:50:54.106784] bdev.c:4765:bdev_name_add: *ERROR*: Bdev name ef8e8ac8-e7b4-4c97-a889-592f97370bcd already exists 00:20:51.969 [2024-11-26 20:50:54.106819] bdev.c:7965:bdev_register: *ERROR*: Unable to add uuid:ef8e8ac8-e7b4-4c97-a889-592f97370bcd alias for bdev NVMe1n1 00:20:51.969 [2024-11-26 20:50:54.106849] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:51.969 Running I/O for 1 seconds... 00:20:51.969 17400.00 IOPS, 67.97 MiB/s 00:20:51.969 Latency(us) 00:20:51.969 [2024-11-26T19:50:55.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.969 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:51.969 NVMe0n1 : 1.00 17459.59 68.20 0.00 0.00 7319.03 4903.06 13204.29 00:20:51.969 [2024-11-26T19:50:55.666Z] =================================================================================================================== 00:20:51.969 [2024-11-26T19:50:55.666Z] Total : 17459.59 68.20 0.00 0.00 7319.03 4903.06 13204.29 00:20:51.969 Received shutdown signal, test time was about 1.000000 seconds 00:20:51.969 00:20:51.969 Latency(us) 00:20:51.969 [2024-11-26T19:50:55.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.969 [2024-11-26T19:50:55.666Z] =================================================================================================================== 00:20:51.969 [2024-11-26T19:50:55.666Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:51.969 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:51.969 rmmod nvme_tcp 00:20:51.969 rmmod nvme_fabrics 00:20:51.969 rmmod nvme_keyring 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:20:51.969 
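As a quick consistency check on the bdevperf summary above (arithmetic only, derived from the logged values): 17459.59 IOPS of 4096-byte writes is 17459.59 * 4096 / 2^20 ≈ 68.20 MiB/s, matching the reported throughput, and at a queue depth of 128 the average latency of roughly 7319 us is in line with Little's law (128 / 17459.59 ≈ 7.33 ms).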
20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1713317 ']' 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1713317 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1713317 ']' 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1713317 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1713317 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1713317' 00:20:51.969 killing process with pid 1713317 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1713317 00:20:51.969 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1713317 00:20:52.228 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:52.228 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:52.228 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:52.228 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:20:52.228 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:20:52.228 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:52.228 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:20:52.228 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:52.228 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:52.228 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.228 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:52.228 20:50:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.767 20:50:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:54.767 00:20:54.767 real 0m7.485s 00:20:54.767 user 0m11.345s 00:20:54.767 sys 0m2.447s 00:20:54.767 20:50:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:54.767 20:50:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:54.767 ************************************ 00:20:54.767 END TEST nvmf_multicontroller 00:20:54.767 ************************************ 00:20:54.767 20:50:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:20:54.767 20:50:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:54.767 20:50:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:54.767 20:50:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.767 ************************************ 00:20:54.767 START TEST nvmf_aer 00:20:54.767 ************************************ 00:20:54.767 20:50:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:54.767 * Looking for test storage... 00:20:54.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:54.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.767 --rc genhtml_branch_coverage=1 00:20:54.767 --rc genhtml_function_coverage=1 00:20:54.767 --rc genhtml_legend=1 00:20:54.767 --rc geninfo_all_blocks=1 00:20:54.767 --rc geninfo_unexecuted_blocks=1 00:20:54.767 00:20:54.767 ' 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:54.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.767 --rc genhtml_branch_coverage=1 00:20:54.767 --rc genhtml_function_coverage=1 00:20:54.767 --rc genhtml_legend=1 00:20:54.767 --rc geninfo_all_blocks=1 00:20:54.767 --rc geninfo_unexecuted_blocks=1 00:20:54.767 00:20:54.767 ' 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:54.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.767 --rc genhtml_branch_coverage=1 00:20:54.767 --rc genhtml_function_coverage=1 00:20:54.767 --rc genhtml_legend=1 00:20:54.767 --rc geninfo_all_blocks=1 00:20:54.767 --rc geninfo_unexecuted_blocks=1 00:20:54.767 00:20:54.767 ' 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:54.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:54.767 --rc genhtml_branch_coverage=1 00:20:54.767 --rc genhtml_function_coverage=1 00:20:54.767 --rc genhtml_legend=1 00:20:54.767 --rc geninfo_all_blocks=1 00:20:54.767 --rc geninfo_unexecuted_blocks=1 00:20:54.767 00:20:54.767 ' 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:54.767 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:54.768 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:20:54.768 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:54.768 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:54.768 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:54.768 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.768 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.768 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.768 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:54.768 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.768 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:20:54.768 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:54.768 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:54.768 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:54.768 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:54.768 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:54.768 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:54.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:54.768 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:54.768 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:54.768 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:54.768 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:54.768 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:54.768 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:54.768 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:54.768 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:54.768 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:54.768 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.768 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:54.768 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.768 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:54.768 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:20:54.768 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:20:54.768 20:50:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:56.668 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:56.668 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:56.668 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:56.669 Found net devices under 0000:09:00.0: cvl_0_0 00:20:56.669 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:56.669 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:56.669 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:56.669 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:56.669 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:56.669 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:56.669 20:51:00 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:56.669 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:56.669 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:56.669 Found net devices under 0000:09:00.1: cvl_0_1 00:20:56.669 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:56.669 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:56.669 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:20:56.669 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:56.669 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:56.669 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:56.669 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:56.669 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:56.669 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:56.669 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:56.669 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:56.669 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:56.669 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:56.669 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:56.669 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:56.669 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:56.669 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:56.669 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:56.669 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:56.669 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:56.669 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:56.927 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:56.927 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:56.927 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:56.927 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:56.927 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:56.927 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:56.927 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:56.927 
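The nvmf_tcp_init lines above move one e810 port (cvl_0_0) into a private network namespace for the target and leave its peer (cvl_0_1) in the root namespace for the initiator, then open TCP/4420 through iptables. A minimal sketch of the same wiring, using only commands the log itself records (interface names and addresses as logged; run as root):
  # target NIC goes into its own namespace, initiator NIC stays in the root one
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP traffic in from the initiator side
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # the ping checks that follow in the log verify connectivity in both directions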
20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:56.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:56.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:20:56.927 00:20:56.927 --- 10.0.0.2 ping statistics --- 00:20:56.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.927 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:20:56.927 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:56.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:56.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:20:56.927 00:20:56.927 --- 10.0.0.1 ping statistics --- 00:20:56.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.927 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:20:56.927 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:56.927 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:20:56.927 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:56.927 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:56.927 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:56.927 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:56.927 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:56.927 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:56.927 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:56.927 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:56.927 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:56.927 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:56.927 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:56.927 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1715608 00:20:56.927 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:56.927 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1715608 00:20:56.927 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1715608 ']' 00:20:56.927 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.927 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:56.927 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.928 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:56.928 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:56.928 [2024-11-26 20:51:00.555267] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:20:56.928 [2024-11-26 20:51:00.555385] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:57.186 [2024-11-26 20:51:00.636312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:57.186 [2024-11-26 20:51:00.696667] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:57.186 [2024-11-26 20:51:00.696716] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:57.186 [2024-11-26 20:51:00.696744] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:57.186 [2024-11-26 20:51:00.696755] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:57.186 [2024-11-26 20:51:00.696764] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:57.186 [2024-11-26 20:51:00.698415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.186 [2024-11-26 20:51:00.698442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:57.186 [2024-11-26 20:51:00.698491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:57.186 [2024-11-26 20:51:00.698495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.186 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:57.186 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:20:57.186 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:57.186 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:57.186 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:57.186 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:57.186 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:57.186 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.186 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:57.186 [2024-11-26 20:51:00.848762] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.186 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.186 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:57.186 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.186 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:57.445 Malloc0 00:20:57.445 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.445 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:57.445 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.445 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:57.445 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:20:57.445 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:57.445 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.445 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:57.445 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.445 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:57.445 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.445 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:57.445 [2024-11-26 20:51:00.919086] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:57.445 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.445 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:57.445 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.445 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:57.445 [ 00:20:57.445 { 00:20:57.445 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:57.445 "subtype": "Discovery", 00:20:57.445 "listen_addresses": [], 00:20:57.445 "allow_any_host": true, 00:20:57.445 "hosts": [] 00:20:57.445 }, 00:20:57.445 { 00:20:57.445 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.445 "subtype": "NVMe", 00:20:57.445 "listen_addresses": [ 00:20:57.445 { 00:20:57.445 "trtype": "TCP", 00:20:57.445 "adrfam": "IPv4", 00:20:57.445 "traddr": "10.0.0.2", 00:20:57.445 "trsvcid": "4420" 00:20:57.445 } 00:20:57.445 ], 00:20:57.445 "allow_any_host": true, 00:20:57.445 "hosts": [], 00:20:57.445 "serial_number": "SPDK00000000000001", 00:20:57.445 "model_number": "SPDK bdev Controller", 00:20:57.445 "max_namespaces": 2, 00:20:57.445 "min_cntlid": 1, 00:20:57.445 "max_cntlid": 65519, 00:20:57.445 "namespaces": [ 00:20:57.445 { 00:20:57.445 "nsid": 1, 00:20:57.445 "bdev_name": "Malloc0", 00:20:57.445 "name": "Malloc0", 00:20:57.445 "nguid": "1DF3248695EB45A6BD121C8C894CE35A", 00:20:57.445 "uuid": "1df32486-95eb-45a6-bd12-1c8c894ce35a" 00:20:57.445 } 00:20:57.445 ] 00:20:57.445 } 00:20:57.445 ] 00:20:57.445 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.445 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:57.445 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:57.445 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1715767 00:20:57.445 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:57.445 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:57.445 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:20:57.445 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:57.445 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:20:57.445 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:20:57.445 20:51:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:20:57.445 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:57.445 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:20:57.445 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:20:57.446 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:57.704 Malloc1 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:57.704 [ 00:20:57.704 { 00:20:57.704 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:57.704 "subtype": "Discovery", 00:20:57.704 "listen_addresses": [], 00:20:57.704 "allow_any_host": true, 00:20:57.704 "hosts": [] 00:20:57.704 }, 00:20:57.704 { 00:20:57.704 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.704 "subtype": "NVMe", 00:20:57.704 "listen_addresses": [ 00:20:57.704 { 00:20:57.704 "trtype": "TCP", 00:20:57.704 "adrfam": "IPv4", 00:20:57.704 "traddr": "10.0.0.2", 00:20:57.704 "trsvcid": "4420" 00:20:57.704 } 00:20:57.704 ], 00:20:57.704 "allow_any_host": true, 00:20:57.704 "hosts": [], 00:20:57.704 "serial_number": "SPDK00000000000001", 00:20:57.704 "model_number": "SPDK bdev Controller", 00:20:57.704 "max_namespaces": 2, 00:20:57.704 "min_cntlid": 1, 00:20:57.704 "max_cntlid": 65519, 00:20:57.704 "namespaces": [ 00:20:57.704 { 00:20:57.704 "nsid": 1, 00:20:57.704 "bdev_name": "Malloc0", 00:20:57.704 "name": "Malloc0", 00:20:57.704 "nguid": "1DF3248695EB45A6BD121C8C894CE35A", 00:20:57.704 "uuid": "1df32486-95eb-45a6-bd12-1c8c894ce35a" 00:20:57.704 }, 00:20:57.704 { 00:20:57.704 "nsid": 2, 00:20:57.704 "bdev_name": "Malloc1", 00:20:57.704 "name": "Malloc1", 00:20:57.704 "nguid": "0038E0055B8F4F79B615E3D187452F86", 00:20:57.704 "uuid": 
"0038e005-5b8f-4f79-b615-e3d187452f86" 00:20:57.704 } 00:20:57.704 ] 00:20:57.704 } 00:20:57.704 ] 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1715767 00:20:57.704 Asynchronous Event Request test 00:20:57.704 Attaching to 10.0.0.2 00:20:57.704 Attached to 10.0.0.2 00:20:57.704 Registering asynchronous event callbacks... 00:20:57.704 Starting namespace attribute notice tests for all controllers... 00:20:57.704 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:57.704 aer_cb - Changed Namespace 00:20:57.704 Cleaning up... 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:57.704 rmmod nvme_tcp 00:20:57.704 rmmod nvme_fabrics 00:20:57.704 rmmod nvme_keyring 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 1715608 ']' 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1715608 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1715608 ']' 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1715608 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:20:57.704 20:51:01 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1715608 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1715608' 00:20:57.704 killing process with pid 1715608 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1715608 00:20:57.704 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1715608 00:20:57.963 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:57.963 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:57.963 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:57.963 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:20:57.963 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:20:57.963 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:57.963 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:20:57.963 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:57.963 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:57.963 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.963 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:57.963 20:51:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:00.492 00:21:00.492 real 0m5.650s 00:21:00.492 user 0m4.387s 00:21:00.492 sys 0m2.102s 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:00.492 ************************************ 00:21:00.492 END TEST nvmf_aer 00:21:00.492 ************************************ 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.492 ************************************ 00:21:00.492 START TEST nvmf_async_init 00:21:00.492 ************************************ 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:00.492 * Looking for test storage... 
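[Editor's note: an outline of the RPC sequence host/aer.sh drove in the trace above, written as direct scripts/rpc.py calls; in the log these go through the rpc_cmd wrapper, and the loop bound (the real helper retries up to 200 times) is simplified here.]

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 --name Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # start the AER tool expecting 2 namespaces; it touches the file once registered
    test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    aerpid=$!
    while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done

    # adding a second namespace triggers the namespace-attribute-changed AER seen above
    rpc.py bdev_malloc_create 64 4096 --name Malloc1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    wait $aerpid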
00:21:00.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:00.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.492 --rc genhtml_branch_coverage=1 00:21:00.492 --rc genhtml_function_coverage=1 00:21:00.492 --rc genhtml_legend=1 00:21:00.492 --rc geninfo_all_blocks=1 00:21:00.492 --rc geninfo_unexecuted_blocks=1 00:21:00.492 00:21:00.492 ' 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:00.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.492 --rc genhtml_branch_coverage=1 00:21:00.492 --rc genhtml_function_coverage=1 00:21:00.492 --rc genhtml_legend=1 00:21:00.492 --rc geninfo_all_blocks=1 00:21:00.492 --rc geninfo_unexecuted_blocks=1 00:21:00.492 00:21:00.492 ' 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:00.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.492 --rc genhtml_branch_coverage=1 00:21:00.492 --rc genhtml_function_coverage=1 00:21:00.492 --rc genhtml_legend=1 00:21:00.492 --rc geninfo_all_blocks=1 00:21:00.492 --rc geninfo_unexecuted_blocks=1 00:21:00.492 00:21:00.492 ' 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:00.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.492 --rc genhtml_branch_coverage=1 00:21:00.492 --rc genhtml_function_coverage=1 00:21:00.492 --rc genhtml_legend=1 00:21:00.492 --rc geninfo_all_blocks=1 00:21:00.492 --rc geninfo_unexecuted_blocks=1 00:21:00.492 00:21:00.492 ' 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:00.492 20:51:03 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:00.492 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:00.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:00.493 20:51:03 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=69f7c2dea0b64cb8875b2e4d2c5111b5 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:21:00.493 20:51:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:02.393 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:02.393 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:02.393 Found net devices under 0000:09:00.0: cvl_0_0 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:02.393 Found net devices under 0000:09:00.1: cvl_0_1 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.393 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:02.394 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:21:02.394 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:02.394 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:02.394 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:02.394 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:02.394 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:02.394 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:02.394 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:02.394 20:51:05 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:02.394 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:02.394 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:02.394 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:02.394 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:02.394 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:02.394 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:02.394 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:02.394 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:02.394 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:02.394 20:51:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:02.394 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:02.394 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:02.394 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:02.394 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:02.394 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:02.394 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:02.394 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:02.394 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:02.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:02.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:21:02.394 00:21:02.394 --- 10.0.0.2 ping statistics --- 00:21:02.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.394 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:21:02.394 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:02.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:02.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:21:02.394 00:21:02.394 --- 10.0.0.1 ping statistics --- 00:21:02.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.394 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:21:02.394 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:02.394 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:21:02.394 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:02.394 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:02.394 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:02.394 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:02.394 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:02.394 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:02.394 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:02.652 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:02.652 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:02.652 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:02.652 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.652 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1717774 00:21:02.652 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:02.652 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1717774 00:21:02.652 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1717774 ']' 00:21:02.652 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.652 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:02.652 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.652 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:02.652 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.652 [2024-11-26 20:51:06.147588] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
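[Editor's note: a reduced sketch of the e810 NIC discovery loop traced above; the PCI addresses are the ones found in this run, and the real gather_supported_nvmf_pci_devs helper additionally builds the PCI list from the bus cache and checks that each link is up.]

    net_devs=()
    for pci in 0000:09:00.0 0000:09:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")     # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done
    # with two interfaces found (cvl_0_0 and cvl_0_1 here), the first becomes the
    # target-side interface and the second the initiator-side interface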
00:21:02.652 [2024-11-26 20:51:06.147676] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.652 [2024-11-26 20:51:06.218603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.652 [2024-11-26 20:51:06.274916] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.652 [2024-11-26 20:51:06.274961] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.652 [2024-11-26 20:51:06.274989] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.652 [2024-11-26 20:51:06.275000] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.652 [2024-11-26 20:51:06.275009] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:02.652 [2024-11-26 20:51:06.275587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.909 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:02.909 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:21:02.909 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:02.909 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:02.909 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.909 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:02.909 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:02.909 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.909 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.909 [2024-11-26 20:51:06.411796] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.909 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.909 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:02.909 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.909 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.909 null0 00:21:02.909 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.909 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:02.909 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.909 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.909 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.909 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:02.909 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:02.909 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.909 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.909 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 69f7c2dea0b64cb8875b2e4d2c5111b5 00:21:02.909 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.909 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.909 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.909 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:02.909 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.909 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:02.909 [2024-11-26 20:51:06.452065] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:02.909 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.910 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:02.910 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.910 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:03.167 nvme0n1 00:21:03.167 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.167 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:03.167 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.167 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:03.167 [ 00:21:03.167 { 00:21:03.167 "name": "nvme0n1", 00:21:03.167 "aliases": [ 00:21:03.167 "69f7c2de-a0b6-4cb8-875b-2e4d2c5111b5" 00:21:03.167 ], 00:21:03.167 "product_name": "NVMe disk", 00:21:03.167 "block_size": 512, 00:21:03.167 "num_blocks": 2097152, 00:21:03.167 "uuid": "69f7c2de-a0b6-4cb8-875b-2e4d2c5111b5", 00:21:03.167 "numa_id": 0, 00:21:03.167 "assigned_rate_limits": { 00:21:03.167 "rw_ios_per_sec": 0, 00:21:03.167 "rw_mbytes_per_sec": 0, 00:21:03.167 "r_mbytes_per_sec": 0, 00:21:03.167 "w_mbytes_per_sec": 0 00:21:03.167 }, 00:21:03.167 "claimed": false, 00:21:03.167 "zoned": false, 00:21:03.167 "supported_io_types": { 00:21:03.167 "read": true, 00:21:03.167 "write": true, 00:21:03.167 "unmap": false, 00:21:03.167 "flush": true, 00:21:03.167 "reset": true, 00:21:03.167 "nvme_admin": true, 00:21:03.167 "nvme_io": true, 00:21:03.167 "nvme_io_md": false, 00:21:03.167 "write_zeroes": true, 00:21:03.167 "zcopy": false, 00:21:03.167 "get_zone_info": false, 00:21:03.167 "zone_management": false, 00:21:03.167 "zone_append": false, 00:21:03.167 "compare": true, 00:21:03.167 "compare_and_write": true, 00:21:03.167 "abort": true, 00:21:03.167 "seek_hole": false, 00:21:03.167 "seek_data": false, 00:21:03.167 "copy": true, 00:21:03.167 "nvme_iov_md": false 00:21:03.167 }, 00:21:03.167 
"memory_domains": [ 00:21:03.167 { 00:21:03.167 "dma_device_id": "system", 00:21:03.167 "dma_device_type": 1 00:21:03.167 } 00:21:03.167 ], 00:21:03.167 "driver_specific": { 00:21:03.167 "nvme": [ 00:21:03.167 { 00:21:03.167 "trid": { 00:21:03.167 "trtype": "TCP", 00:21:03.167 "adrfam": "IPv4", 00:21:03.167 "traddr": "10.0.0.2", 00:21:03.167 "trsvcid": "4420", 00:21:03.167 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:03.167 }, 00:21:03.167 "ctrlr_data": { 00:21:03.167 "cntlid": 1, 00:21:03.167 "vendor_id": "0x8086", 00:21:03.167 "model_number": "SPDK bdev Controller", 00:21:03.167 "serial_number": "00000000000000000000", 00:21:03.167 "firmware_revision": "25.01", 00:21:03.167 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:03.167 "oacs": { 00:21:03.167 "security": 0, 00:21:03.167 "format": 0, 00:21:03.167 "firmware": 0, 00:21:03.167 "ns_manage": 0 00:21:03.167 }, 00:21:03.167 "multi_ctrlr": true, 00:21:03.167 "ana_reporting": false 00:21:03.167 }, 00:21:03.167 "vs": { 00:21:03.167 "nvme_version": "1.3" 00:21:03.167 }, 00:21:03.167 "ns_data": { 00:21:03.167 "id": 1, 00:21:03.167 "can_share": true 00:21:03.167 } 00:21:03.167 } 00:21:03.167 ], 00:21:03.167 "mp_policy": "active_passive" 00:21:03.167 } 00:21:03.167 } 00:21:03.167 ] 00:21:03.167 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.167 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:03.167 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.167 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:03.167 [2024-11-26 20:51:06.700544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:03.167 [2024-11-26 20:51:06.700620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x806710 (9): Bad file descriptor 00:21:03.167 [2024-11-26 20:51:06.832421] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:21:03.167 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.167 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:03.167 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.167 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:03.167 [ 00:21:03.167 { 00:21:03.167 "name": "nvme0n1", 00:21:03.167 "aliases": [ 00:21:03.167 "69f7c2de-a0b6-4cb8-875b-2e4d2c5111b5" 00:21:03.167 ], 00:21:03.167 "product_name": "NVMe disk", 00:21:03.167 "block_size": 512, 00:21:03.167 "num_blocks": 2097152, 00:21:03.167 "uuid": "69f7c2de-a0b6-4cb8-875b-2e4d2c5111b5", 00:21:03.167 "numa_id": 0, 00:21:03.167 "assigned_rate_limits": { 00:21:03.167 "rw_ios_per_sec": 0, 00:21:03.167 "rw_mbytes_per_sec": 0, 00:21:03.167 "r_mbytes_per_sec": 0, 00:21:03.167 "w_mbytes_per_sec": 0 00:21:03.167 }, 00:21:03.167 "claimed": false, 00:21:03.167 "zoned": false, 00:21:03.167 "supported_io_types": { 00:21:03.167 "read": true, 00:21:03.167 "write": true, 00:21:03.167 "unmap": false, 00:21:03.167 "flush": true, 00:21:03.167 "reset": true, 00:21:03.167 "nvme_admin": true, 00:21:03.167 "nvme_io": true, 00:21:03.167 "nvme_io_md": false, 00:21:03.167 "write_zeroes": true, 00:21:03.167 "zcopy": false, 00:21:03.167 "get_zone_info": false, 00:21:03.167 "zone_management": false, 00:21:03.167 "zone_append": false, 00:21:03.167 "compare": true, 00:21:03.167 "compare_and_write": true, 00:21:03.167 "abort": true, 00:21:03.167 "seek_hole": false, 00:21:03.167 "seek_data": false, 00:21:03.167 "copy": true, 00:21:03.167 "nvme_iov_md": false 00:21:03.167 }, 00:21:03.167 "memory_domains": [ 00:21:03.167 { 00:21:03.167 "dma_device_id": "system", 00:21:03.167 "dma_device_type": 1 00:21:03.167 } 00:21:03.167 ], 00:21:03.167 "driver_specific": { 00:21:03.167 "nvme": [ 00:21:03.167 { 00:21:03.167 "trid": { 00:21:03.167 "trtype": "TCP", 00:21:03.167 "adrfam": "IPv4", 00:21:03.167 "traddr": "10.0.0.2", 00:21:03.167 "trsvcid": "4420", 00:21:03.167 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:03.167 }, 00:21:03.167 "ctrlr_data": { 00:21:03.167 "cntlid": 2, 00:21:03.167 "vendor_id": "0x8086", 00:21:03.167 "model_number": "SPDK bdev Controller", 00:21:03.167 "serial_number": "00000000000000000000", 00:21:03.167 "firmware_revision": "25.01", 00:21:03.167 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:03.167 "oacs": { 00:21:03.167 "security": 0, 00:21:03.167 "format": 0, 00:21:03.167 "firmware": 0, 00:21:03.167 "ns_manage": 0 00:21:03.167 }, 00:21:03.167 "multi_ctrlr": true, 00:21:03.168 "ana_reporting": false 00:21:03.168 }, 00:21:03.168 "vs": { 00:21:03.168 "nvme_version": "1.3" 00:21:03.168 }, 00:21:03.168 "ns_data": { 00:21:03.168 "id": 1, 00:21:03.168 "can_share": true 00:21:03.168 } 00:21:03.168 } 00:21:03.168 ], 00:21:03.168 "mp_policy": "active_passive" 00:21:03.168 } 00:21:03.168 } 00:21:03.168 ] 00:21:03.168 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.168 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:03.168 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.168 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:03.168 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
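[Editor's note: one way to confirm the reconnect took effect is to pull the cntlid field out of the bdev_get_bdevs JSON shown above; jq is not part of the test script, this is only an illustrative check.]

    rpc.py bdev_get_bdevs -b nvme0n1 \
        | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'   # 1 before the reset, 2 after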
00:21:03.425 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:03.425 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.3ZGvZKR0zR 00:21:03.425 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:03.425 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.3ZGvZKR0zR 00:21:03.425 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.3ZGvZKR0zR 00:21:03.425 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.425 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:03.425 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.425 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:03.425 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.425 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:03.425 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.425 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:03.425 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.425 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:03.425 [2024-11-26 20:51:06.889163] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:03.425 [2024-11-26 20:51:06.889281] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:03.425 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.425 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:21:03.425 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.425 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:03.425 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.425 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:03.425 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.425 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:03.425 [2024-11-26 20:51:06.905209] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:03.425 nvme0n1 00:21:03.425 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.425 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:21:03.425 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.425 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:03.425 [ 00:21:03.425 { 00:21:03.425 "name": "nvme0n1", 00:21:03.425 "aliases": [ 00:21:03.425 "69f7c2de-a0b6-4cb8-875b-2e4d2c5111b5" 00:21:03.425 ], 00:21:03.425 "product_name": "NVMe disk", 00:21:03.425 "block_size": 512, 00:21:03.425 "num_blocks": 2097152, 00:21:03.426 "uuid": "69f7c2de-a0b6-4cb8-875b-2e4d2c5111b5", 00:21:03.426 "numa_id": 0, 00:21:03.426 "assigned_rate_limits": { 00:21:03.426 "rw_ios_per_sec": 0, 00:21:03.426 "rw_mbytes_per_sec": 0, 00:21:03.426 "r_mbytes_per_sec": 0, 00:21:03.426 "w_mbytes_per_sec": 0 00:21:03.426 }, 00:21:03.426 "claimed": false, 00:21:03.426 "zoned": false, 00:21:03.426 "supported_io_types": { 00:21:03.426 "read": true, 00:21:03.426 "write": true, 00:21:03.426 "unmap": false, 00:21:03.426 "flush": true, 00:21:03.426 "reset": true, 00:21:03.426 "nvme_admin": true, 00:21:03.426 "nvme_io": true, 00:21:03.426 "nvme_io_md": false, 00:21:03.426 "write_zeroes": true, 00:21:03.426 "zcopy": false, 00:21:03.426 "get_zone_info": false, 00:21:03.426 "zone_management": false, 00:21:03.426 "zone_append": false, 00:21:03.426 "compare": true, 00:21:03.426 "compare_and_write": true, 00:21:03.426 "abort": true, 00:21:03.426 "seek_hole": false, 00:21:03.426 "seek_data": false, 00:21:03.426 "copy": true, 00:21:03.426 "nvme_iov_md": false 00:21:03.426 }, 00:21:03.426 "memory_domains": [ 00:21:03.426 { 00:21:03.426 "dma_device_id": "system", 00:21:03.426 "dma_device_type": 1 00:21:03.426 } 00:21:03.426 ], 00:21:03.426 "driver_specific": { 00:21:03.426 "nvme": [ 00:21:03.426 { 00:21:03.426 "trid": { 00:21:03.426 "trtype": "TCP", 00:21:03.426 "adrfam": "IPv4", 00:21:03.426 "traddr": "10.0.0.2", 00:21:03.426 "trsvcid": "4421", 00:21:03.426 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:03.426 }, 00:21:03.426 "ctrlr_data": { 00:21:03.426 "cntlid": 3, 00:21:03.426 "vendor_id": "0x8086", 00:21:03.426 "model_number": "SPDK bdev Controller", 00:21:03.426 "serial_number": "00000000000000000000", 00:21:03.426 "firmware_revision": "25.01", 00:21:03.426 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:03.426 "oacs": { 00:21:03.426 "security": 0, 00:21:03.426 "format": 0, 00:21:03.426 "firmware": 0, 00:21:03.426 "ns_manage": 0 00:21:03.426 }, 00:21:03.426 "multi_ctrlr": true, 00:21:03.426 "ana_reporting": false 00:21:03.426 }, 00:21:03.426 "vs": { 00:21:03.426 "nvme_version": "1.3" 00:21:03.426 }, 00:21:03.426 "ns_data": { 00:21:03.426 "id": 1, 00:21:03.426 "can_share": true 00:21:03.426 } 00:21:03.426 } 00:21:03.426 ], 00:21:03.426 "mp_policy": "active_passive" 00:21:03.426 } 00:21:03.426 } 00:21:03.426 ] 00:21:03.426 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.426 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:03.426 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.426 20:51:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:03.426 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.426 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.3ZGvZKR0zR 00:21:03.426 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
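Steps 53-73 of async_init.sh, traced above, reconnect the same subsystem over a TLS-protected listener: a pre-shared key is written to a temp file and registered in the keyring, the subsystem is closed to arbitrary hosts, a second listener on port 4421 is added with --secure-channel, the host NQN is admitted with that PSK, and the controller is re-attached with --psk. The namespace comes back with the same UUID (69f7c2de-...); only cntlid and trsvcid change. A condensed sketch of that flow follows, assuming scripts/rpc.py, a target that already owns nqn.2016-06.io.spdk:cnode0, and the interchange-format key taken from this log (a throwaway test key, not a real secret).

RPC=./scripts/rpc.py
KEY_FILE=$(mktemp)
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY_FILE"
chmod 0600 "$KEY_FILE"

$RPC keyring_file_add_key key0 "$KEY_FILE"                    # register the PSK under the name key0
$RPC nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4421 --secure-channel               # TLS listener on the second port
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
    nqn.2016-06.io.spdk:host1 --psk key0                      # only this host, with this PSK
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
$RPC bdev_get_bdevs -b nvme0n1                                # UUID should match the 4420 run
$RPC bdev_nvme_detach_controller nvme0
rm -f "$KEY_FILE"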
00:21:03.426 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:21:03.426 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:03.426 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:21:03.426 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:03.426 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:21:03.426 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:03.426 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:03.426 rmmod nvme_tcp 00:21:03.426 rmmod nvme_fabrics 00:21:03.426 rmmod nvme_keyring 00:21:03.426 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:03.426 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:21:03.426 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:21:03.426 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1717774 ']' 00:21:03.426 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1717774 00:21:03.426 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1717774 ']' 00:21:03.426 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1717774 00:21:03.426 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:21:03.426 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:03.426 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1717774 00:21:03.426 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:03.426 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:03.426 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1717774' 00:21:03.426 killing process with pid 1717774 00:21:03.426 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1717774 00:21:03.426 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1717774 00:21:03.684 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:03.684 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:03.684 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:03.684 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:21:03.684 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:21:03.684 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:21:03.684 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:03.684 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:03.684 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:03.684 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:21:03.684 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:03.684 20:51:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:06.221 00:21:06.221 real 0m5.654s 00:21:06.221 user 0m2.184s 00:21:06.221 sys 0m1.901s 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:06.221 ************************************ 00:21:06.221 END TEST nvmf_async_init 00:21:06.221 ************************************ 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.221 ************************************ 00:21:06.221 START TEST dma 00:21:06.221 ************************************ 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:06.221 * Looking for test storage... 00:21:06.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:21:06.221 20:51:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:06.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.222 --rc genhtml_branch_coverage=1 00:21:06.222 --rc genhtml_function_coverage=1 00:21:06.222 --rc genhtml_legend=1 00:21:06.222 --rc geninfo_all_blocks=1 00:21:06.222 --rc geninfo_unexecuted_blocks=1 00:21:06.222 00:21:06.222 ' 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:06.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.222 --rc genhtml_branch_coverage=1 00:21:06.222 --rc genhtml_function_coverage=1 00:21:06.222 --rc genhtml_legend=1 00:21:06.222 --rc geninfo_all_blocks=1 00:21:06.222 --rc geninfo_unexecuted_blocks=1 00:21:06.222 00:21:06.222 ' 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:06.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.222 --rc genhtml_branch_coverage=1 00:21:06.222 --rc genhtml_function_coverage=1 00:21:06.222 --rc genhtml_legend=1 00:21:06.222 --rc geninfo_all_blocks=1 00:21:06.222 --rc geninfo_unexecuted_blocks=1 00:21:06.222 00:21:06.222 ' 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:06.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.222 --rc genhtml_branch_coverage=1 00:21:06.222 --rc genhtml_function_coverage=1 00:21:06.222 --rc genhtml_legend=1 00:21:06.222 --rc geninfo_all_blocks=1 00:21:06.222 --rc geninfo_unexecuted_blocks=1 00:21:06.222 00:21:06.222 ' 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:06.222 
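The "decimal 1 / decimal 2" trace above is the lt/cmp_versions helper from scripts/common.sh checking the installed lcov against 1.15: each version string is split on ".", "-" and ":" and the components are compared one by one. The following is a condensed re-implementation of that comparison for illustration only, not the verbatim scripts/common.sh source; it also simplifies by treating missing components as 0 and assumes numeric fields.

# Return 0 (true) if version $1 is strictly older than version $2.
version_lt() {
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1    # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 is older than 2"   # same outcome as the trace above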
20:51:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:06.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:21:06.222 00:21:06.222 real 0m0.142s 00:21:06.222 user 0m0.081s 00:21:06.222 sys 0m0.069s 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:06.222 ************************************ 00:21:06.222 END TEST dma 00:21:06.222 ************************************ 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.222 ************************************ 00:21:06.222 START TEST nvmf_identify 00:21:06.222 
************************************ 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:06.222 * Looking for test storage... 00:21:06.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:21:06.222 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:06.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.223 --rc genhtml_branch_coverage=1 00:21:06.223 --rc genhtml_function_coverage=1 00:21:06.223 --rc genhtml_legend=1 00:21:06.223 --rc geninfo_all_blocks=1 00:21:06.223 --rc geninfo_unexecuted_blocks=1 00:21:06.223 00:21:06.223 ' 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:06.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.223 --rc genhtml_branch_coverage=1 00:21:06.223 --rc genhtml_function_coverage=1 00:21:06.223 --rc genhtml_legend=1 00:21:06.223 --rc geninfo_all_blocks=1 00:21:06.223 --rc geninfo_unexecuted_blocks=1 00:21:06.223 00:21:06.223 ' 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:06.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.223 --rc genhtml_branch_coverage=1 00:21:06.223 --rc genhtml_function_coverage=1 00:21:06.223 --rc genhtml_legend=1 00:21:06.223 --rc geninfo_all_blocks=1 00:21:06.223 --rc geninfo_unexecuted_blocks=1 00:21:06.223 00:21:06.223 ' 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:06.223 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.223 --rc genhtml_branch_coverage=1 00:21:06.223 --rc genhtml_function_coverage=1 00:21:06.223 --rc genhtml_legend=1 00:21:06.223 --rc geninfo_all_blocks=1 00:21:06.223 --rc geninfo_unexecuted_blocks=1 00:21:06.223 00:21:06.223 ' 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:06.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:21:06.223 20:51:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:08.126 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:08.127 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:08.127 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
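The gather_supported_nvmf_pci_devs trace here matches the host's PCI NICs against known Intel E810/X722 and Mellanox device IDs and then asks sysfs which kernel net devices hang off each matched PCI function; that is where the "Found net devices under 0000:09:00.0: cvl_0_0" lines below come from. A small sketch of that sysfs lookup for a single PCI address follows; the 0000:09:00.0 address is taken from this log, and any NIC's bus/device/function would work the same way.

# Sketch: list the kernel net devices that belong to one PCI function,
# using the same /sys/bus/pci/devices/<bdf>/net/* glob the trace expands below.
pci=0000:09:00.0
pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )
if [[ -e ${pci_net_devs[0]} ]]; then
    pci_net_devs=( "${pci_net_devs[@]##*/}" )     # keep just the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
else
    echo "No net devices bound under $pci"
fi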
00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:08.127 Found net devices under 0000:09:00.0: cvl_0_0 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:08.127 Found net devices under 0000:09:00.1: cvl_0_1 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:08.127 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:08.386 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:08.386 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:08.386 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:08.386 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:08.386 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:08.386 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:21:08.386 00:21:08.386 --- 10.0.0.2 ping statistics --- 00:21:08.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.386 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:21:08.386 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:08.386 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:08.386 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:21:08.386 00:21:08.386 --- 10.0.0.1 ping statistics --- 00:21:08.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.386 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:21:08.386 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:08.386 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:21:08.386 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:08.386 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:08.386 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:08.386 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:08.386 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:08.386 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:08.386 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:08.386 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:08.386 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:08.386 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:08.386 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1720422 00:21:08.386 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:08.386 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:08.386 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1720422 00:21:08.386 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1720422 ']' 00:21:08.386 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.386 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:08.386 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:08.387 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:08.387 20:51:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:08.387 [2024-11-26 20:51:11.954206] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:21:08.387 [2024-11-26 20:51:11.954312] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:08.387 [2024-11-26 20:51:12.028096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:08.645 [2024-11-26 20:51:12.085524] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:08.645 [2024-11-26 20:51:12.085573] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:08.645 [2024-11-26 20:51:12.085602] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:08.645 [2024-11-26 20:51:12.085612] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:08.645 [2024-11-26 20:51:12.085621] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:08.645 [2024-11-26 20:51:12.087206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.645 [2024-11-26 20:51:12.087271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:08.645 [2024-11-26 20:51:12.087337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:08.645 [2024-11-26 20:51:12.087342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:08.645 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:08.645 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:21:08.645 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:08.645 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.645 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:08.645 [2024-11-26 20:51:12.210006] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:08.645 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.645 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:08.645 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:08.645 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:08.645 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:08.645 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.645 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:08.645 Malloc0 00:21:08.645 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.645 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:08.645 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.645 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:08.645 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.645 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:08.645 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.645 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:08.645 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.645 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:08.645 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.645 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:08.645 [2024-11-26 20:51:12.302670] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:08.645 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.645 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:08.645 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.645 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:08.645 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.645 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:08.645 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.645 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:08.645 [ 00:21:08.645 { 00:21:08.645 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:08.645 "subtype": "Discovery", 00:21:08.645 "listen_addresses": [ 00:21:08.645 { 00:21:08.645 "trtype": "TCP", 00:21:08.645 "adrfam": "IPv4", 00:21:08.645 "traddr": "10.0.0.2", 00:21:08.645 "trsvcid": "4420" 00:21:08.645 } 00:21:08.645 ], 00:21:08.645 "allow_any_host": true, 00:21:08.645 "hosts": [] 00:21:08.646 }, 00:21:08.646 { 00:21:08.646 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.646 "subtype": "NVMe", 00:21:08.646 "listen_addresses": [ 00:21:08.646 { 00:21:08.646 "trtype": "TCP", 00:21:08.646 "adrfam": "IPv4", 00:21:08.646 "traddr": "10.0.0.2", 00:21:08.646 "trsvcid": "4420" 00:21:08.646 } 00:21:08.646 ], 00:21:08.646 "allow_any_host": true, 00:21:08.646 "hosts": [], 00:21:08.646 "serial_number": "SPDK00000000000001", 00:21:08.646 "model_number": "SPDK bdev Controller", 00:21:08.646 "max_namespaces": 32, 00:21:08.646 "min_cntlid": 1, 00:21:08.646 "max_cntlid": 65519, 00:21:08.646 "namespaces": [ 00:21:08.646 { 00:21:08.646 "nsid": 1, 00:21:08.646 "bdev_name": "Malloc0", 00:21:08.646 "name": "Malloc0", 00:21:08.646 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:08.646 "eui64": "ABCDEF0123456789", 00:21:08.646 "uuid": "223ad4c1-affd-4382-b471-f9a9870a799b" 00:21:08.646 } 00:21:08.646 ] 00:21:08.646 } 00:21:08.646 ] 00:21:08.646 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.646 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:08.906 [2024-11-26 20:51:12.347468] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:21:08.906 [2024-11-26 20:51:12.347520] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1720551 ] 00:21:08.906 [2024-11-26 20:51:12.401440] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:21:08.906 [2024-11-26 20:51:12.401501] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:08.906 [2024-11-26 20:51:12.401512] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:08.906 [2024-11-26 20:51:12.401535] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:08.906 [2024-11-26 20:51:12.401549] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:08.906 [2024-11-26 20:51:12.402211] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:21:08.906 [2024-11-26 20:51:12.402273] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1fdf690 0 00:21:08.906 [2024-11-26 20:51:12.412320] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:08.906 [2024-11-26 20:51:12.412341] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:08.906 [2024-11-26 20:51:12.412349] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:08.906 [2024-11-26 20:51:12.412355] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:08.906 [2024-11-26 20:51:12.412403] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.906 [2024-11-26 20:51:12.412417] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.906 [2024-11-26 20:51:12.412424] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fdf690) 00:21:08.906 [2024-11-26 20:51:12.412441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:08.906 [2024-11-26 20:51:12.412468] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2041100, cid 0, qid 0 00:21:08.906 [2024-11-26 20:51:12.419317] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.906 [2024-11-26 20:51:12.419337] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.906 [2024-11-26 20:51:12.419344] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.906 [2024-11-26 20:51:12.419352] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2041100) on tqpair=0x1fdf690 00:21:08.906 [2024-11-26 20:51:12.419368] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:08.906 [2024-11-26 20:51:12.419382] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:21:08.906 [2024-11-26 20:51:12.419392] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:21:08.906 [2024-11-26 20:51:12.419416] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.906 [2024-11-26 20:51:12.419425] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.906 [2024-11-26 20:51:12.419436] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fdf690) 00:21:08.906 [2024-11-26 20:51:12.419448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.906 [2024-11-26 20:51:12.419472] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2041100, cid 0, qid 0 00:21:08.906 [2024-11-26 20:51:12.419588] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.906 [2024-11-26 20:51:12.419604] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.906 [2024-11-26 20:51:12.419615] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.906 [2024-11-26 20:51:12.419623] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2041100) on tqpair=0x1fdf690 00:21:08.906 [2024-11-26 20:51:12.419637] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:21:08.906 [2024-11-26 20:51:12.419652] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:21:08.906 [2024-11-26 20:51:12.419668] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.906 [2024-11-26 20:51:12.419677] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.906 [2024-11-26 20:51:12.419683] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fdf690) 00:21:08.906 [2024-11-26 20:51:12.419694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.906 [2024-11-26 20:51:12.419716] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2041100, cid 0, qid 0 00:21:08.906 [2024-11-26 20:51:12.419799] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.906 [2024-11-26 20:51:12.419814] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.906 [2024-11-26 20:51:12.419820] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.906 [2024-11-26 20:51:12.419827] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2041100) on tqpair=0x1fdf690 00:21:08.906 [2024-11-26 20:51:12.419836] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:21:08.906 [2024-11-26 20:51:12.419851] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:08.906 [2024-11-26 20:51:12.419867] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.906 [2024-11-26 20:51:12.419874] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.906 [2024-11-26 20:51:12.419881] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fdf690) 00:21:08.906 [2024-11-26 20:51:12.419891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.906 [2024-11-26 20:51:12.419913] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2041100, cid 0, qid 0 
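For reference, the target-side setup that this identify pass exercises reduces to the handful of RPCs visible in the xtrace above plus one identify invocation. A minimal out-of-harness sketch follows; it assumes a running nvmf_tgt reachable over the default JSON-RPC socket and the SPDK tree at $SPDK_DIR (inside the harness the same calls go through the rpc_cmd wrapper shown in the log):

  # transport, backing bdev, subsystem, namespace and listeners, flags exactly as the test issues them
  $SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  $SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  $SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # query the discovery subsystem; this produces the controller dump printed further down
  $SPDK_DIR/build/bin/spdk_nvme_identify -L all \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'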
00:21:08.906 [2024-11-26 20:51:12.419992] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.906 [2024-11-26 20:51:12.420007] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.906 [2024-11-26 20:51:12.420014] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.906 [2024-11-26 20:51:12.420021] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2041100) on tqpair=0x1fdf690 00:21:08.906 [2024-11-26 20:51:12.420030] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:08.906 [2024-11-26 20:51:12.420051] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.906 [2024-11-26 20:51:12.420060] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.906 [2024-11-26 20:51:12.420066] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fdf690) 00:21:08.906 [2024-11-26 20:51:12.420077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.906 [2024-11-26 20:51:12.420107] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2041100, cid 0, qid 0 00:21:08.906 [2024-11-26 20:51:12.420194] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.906 [2024-11-26 20:51:12.420209] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.906 [2024-11-26 20:51:12.420216] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.906 [2024-11-26 20:51:12.420222] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2041100) on tqpair=0x1fdf690 00:21:08.906 [2024-11-26 20:51:12.420231] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:08.906 [2024-11-26 20:51:12.420244] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:08.906 [2024-11-26 20:51:12.420257] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:08.906 [2024-11-26 20:51:12.420368] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:21:08.906 [2024-11-26 20:51:12.420381] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:08.906 [2024-11-26 20:51:12.420395] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.906 [2024-11-26 20:51:12.420403] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.906 [2024-11-26 20:51:12.420410] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fdf690) 00:21:08.906 [2024-11-26 20:51:12.420420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.906 [2024-11-26 20:51:12.420456] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2041100, cid 0, qid 0 00:21:08.906 [2024-11-26 20:51:12.420549] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.906 [2024-11-26 20:51:12.420564] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.906 [2024-11-26 20:51:12.420571] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.906 [2024-11-26 20:51:12.420581] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2041100) on tqpair=0x1fdf690 00:21:08.907 [2024-11-26 20:51:12.420591] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:08.907 [2024-11-26 20:51:12.420607] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.907 [2024-11-26 20:51:12.420616] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.907 [2024-11-26 20:51:12.420623] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fdf690) 00:21:08.907 [2024-11-26 20:51:12.420637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.907 [2024-11-26 20:51:12.420659] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2041100, cid 0, qid 0 00:21:08.907 [2024-11-26 20:51:12.420787] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.907 [2024-11-26 20:51:12.420802] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.907 [2024-11-26 20:51:12.420809] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.907 [2024-11-26 20:51:12.420816] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2041100) on tqpair=0x1fdf690 00:21:08.907 [2024-11-26 20:51:12.420823] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:08.907 [2024-11-26 20:51:12.420835] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:08.907 [2024-11-26 20:51:12.420850] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:21:08.907 [2024-11-26 20:51:12.420876] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:08.907 [2024-11-26 20:51:12.420897] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.907 [2024-11-26 20:51:12.420905] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fdf690) 00:21:08.907 [2024-11-26 20:51:12.420916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.907 [2024-11-26 20:51:12.420951] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2041100, cid 0, qid 0 00:21:08.907 [2024-11-26 20:51:12.421103] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:08.907 [2024-11-26 20:51:12.421119] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:08.907 [2024-11-26 20:51:12.421128] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:08.907 [2024-11-26 20:51:12.421135] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fdf690): datao=0, datal=4096, cccid=0 00:21:08.907 [2024-11-26 20:51:12.421143] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x2041100) on tqpair(0x1fdf690): expected_datao=0, payload_size=4096 00:21:08.907 [2024-11-26 20:51:12.421150] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.907 [2024-11-26 20:51:12.421161] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:08.907 [2024-11-26 20:51:12.421169] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:08.907 [2024-11-26 20:51:12.421195] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.907 [2024-11-26 20:51:12.421208] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.907 [2024-11-26 20:51:12.421215] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.907 [2024-11-26 20:51:12.421221] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2041100) on tqpair=0x1fdf690 00:21:08.907 [2024-11-26 20:51:12.421234] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:21:08.907 [2024-11-26 20:51:12.421246] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:21:08.907 [2024-11-26 20:51:12.421254] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:21:08.907 [2024-11-26 20:51:12.421263] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:21:08.907 [2024-11-26 20:51:12.421271] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:21:08.907 [2024-11-26 20:51:12.421279] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:21:08.907 [2024-11-26 20:51:12.421294] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:08.907 [2024-11-26 20:51:12.421317] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.907 [2024-11-26 20:51:12.421326] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.907 [2024-11-26 20:51:12.421332] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fdf690) 00:21:08.907 [2024-11-26 20:51:12.421343] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:08.907 [2024-11-26 20:51:12.421365] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2041100, cid 0, qid 0 00:21:08.907 [2024-11-26 20:51:12.421499] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.907 [2024-11-26 20:51:12.421514] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.907 [2024-11-26 20:51:12.421521] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.907 [2024-11-26 20:51:12.421532] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2041100) on tqpair=0x1fdf690 00:21:08.907 [2024-11-26 20:51:12.421544] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.907 [2024-11-26 20:51:12.421552] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.907 [2024-11-26 20:51:12.421558] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fdf690) 00:21:08.907 
[2024-11-26 20:51:12.421568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.907 [2024-11-26 20:51:12.421578] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.907 [2024-11-26 20:51:12.421585] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.907 [2024-11-26 20:51:12.421591] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1fdf690) 00:21:08.907 [2024-11-26 20:51:12.421600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.907 [2024-11-26 20:51:12.421609] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.907 [2024-11-26 20:51:12.421615] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.907 [2024-11-26 20:51:12.421621] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1fdf690) 00:21:08.907 [2024-11-26 20:51:12.421630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.907 [2024-11-26 20:51:12.421639] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.907 [2024-11-26 20:51:12.421646] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.907 [2024-11-26 20:51:12.421652] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fdf690) 00:21:08.907 [2024-11-26 20:51:12.421660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.907 [2024-11-26 20:51:12.421684] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:08.907 [2024-11-26 20:51:12.421705] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:08.907 [2024-11-26 20:51:12.421719] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.907 [2024-11-26 20:51:12.421727] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fdf690) 00:21:08.907 [2024-11-26 20:51:12.421752] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.907 [2024-11-26 20:51:12.421774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2041100, cid 0, qid 0 00:21:08.907 [2024-11-26 20:51:12.421785] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2041280, cid 1, qid 0 00:21:08.907 [2024-11-26 20:51:12.421792] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2041400, cid 2, qid 0 00:21:08.907 [2024-11-26 20:51:12.421814] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2041580, cid 3, qid 0 00:21:08.907 [2024-11-26 20:51:12.421822] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2041700, cid 4, qid 0 00:21:08.907 [2024-11-26 20:51:12.421953] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.907 [2024-11-26 20:51:12.421968] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.907 [2024-11-26 20:51:12.421975] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:21:08.907 [2024-11-26 20:51:12.421982] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2041700) on tqpair=0x1fdf690 00:21:08.907 [2024-11-26 20:51:12.421993] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:21:08.907 [2024-11-26 20:51:12.422003] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:21:08.907 [2024-11-26 20:51:12.422026] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.907 [2024-11-26 20:51:12.422036] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fdf690) 00:21:08.907 [2024-11-26 20:51:12.422050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.907 [2024-11-26 20:51:12.422072] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2041700, cid 4, qid 0 00:21:08.907 [2024-11-26 20:51:12.422162] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:08.907 [2024-11-26 20:51:12.422177] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:08.907 [2024-11-26 20:51:12.422184] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:08.907 [2024-11-26 20:51:12.422191] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fdf690): datao=0, datal=4096, cccid=4 00:21:08.907 [2024-11-26 20:51:12.422205] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2041700) on tqpair(0x1fdf690): expected_datao=0, payload_size=4096 00:21:08.907 [2024-11-26 20:51:12.422216] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.907 [2024-11-26 20:51:12.422234] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:08.907 [2024-11-26 20:51:12.422243] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:08.907 [2024-11-26 20:51:12.422265] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.907 [2024-11-26 20:51:12.422279] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.907 [2024-11-26 20:51:12.422286] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.907 [2024-11-26 20:51:12.422292] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2041700) on tqpair=0x1fdf690 00:21:08.907 [2024-11-26 20:51:12.422324] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:21:08.907 [2024-11-26 20:51:12.422366] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.908 [2024-11-26 20:51:12.422378] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fdf690) 00:21:08.908 [2024-11-26 20:51:12.422389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.908 [2024-11-26 20:51:12.422401] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.908 [2024-11-26 20:51:12.422408] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.908 [2024-11-26 20:51:12.422414] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fdf690) 00:21:08.908 [2024-11-26 20:51:12.422423] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:08.908 [2024-11-26 20:51:12.422450] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2041700, cid 4, qid 0 00:21:08.908 [2024-11-26 20:51:12.422462] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2041880, cid 5, qid 0 00:21:08.908 [2024-11-26 20:51:12.422607] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:08.908 [2024-11-26 20:51:12.422622] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:08.908 [2024-11-26 20:51:12.422631] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:08.908 [2024-11-26 20:51:12.422638] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fdf690): datao=0, datal=1024, cccid=4 00:21:08.908 [2024-11-26 20:51:12.422660] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2041700) on tqpair(0x1fdf690): expected_datao=0, payload_size=1024 00:21:08.908 [2024-11-26 20:51:12.422668] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.908 [2024-11-26 20:51:12.422678] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:08.908 [2024-11-26 20:51:12.422685] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:08.908 [2024-11-26 20:51:12.422694] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.908 [2024-11-26 20:51:12.422703] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.908 [2024-11-26 20:51:12.422714] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.908 [2024-11-26 20:51:12.422721] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2041880) on tqpair=0x1fdf690 00:21:08.908 [2024-11-26 20:51:12.467323] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.908 [2024-11-26 20:51:12.467343] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.908 [2024-11-26 20:51:12.467351] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.908 [2024-11-26 20:51:12.467359] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2041700) on tqpair=0x1fdf690 00:21:08.908 [2024-11-26 20:51:12.467377] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.908 [2024-11-26 20:51:12.467386] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fdf690) 00:21:08.908 [2024-11-26 20:51:12.467398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.908 [2024-11-26 20:51:12.467432] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2041700, cid 4, qid 0 00:21:08.908 [2024-11-26 20:51:12.467542] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:08.908 [2024-11-26 20:51:12.467558] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:08.908 [2024-11-26 20:51:12.467565] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:08.908 [2024-11-26 20:51:12.467574] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fdf690): datao=0, datal=3072, cccid=4 00:21:08.908 [2024-11-26 20:51:12.467583] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2041700) on tqpair(0x1fdf690): expected_datao=0, payload_size=3072 00:21:08.908 [2024-11-26 20:51:12.467590] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.908 [2024-11-26 20:51:12.467601] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:08.908 [2024-11-26 20:51:12.467608] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:08.908 [2024-11-26 20:51:12.467636] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.908 [2024-11-26 20:51:12.467651] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.908 [2024-11-26 20:51:12.467658] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.908 [2024-11-26 20:51:12.467665] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2041700) on tqpair=0x1fdf690 00:21:08.908 [2024-11-26 20:51:12.467681] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.908 [2024-11-26 20:51:12.467690] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fdf690) 00:21:08.908 [2024-11-26 20:51:12.467701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.908 [2024-11-26 20:51:12.467731] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2041700, cid 4, qid 0 00:21:08.908 [2024-11-26 20:51:12.467834] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:08.908 [2024-11-26 20:51:12.467848] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:08.908 [2024-11-26 20:51:12.467855] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:08.908 [2024-11-26 20:51:12.467862] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fdf690): datao=0, datal=8, cccid=4 00:21:08.908 [2024-11-26 20:51:12.467869] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2041700) on tqpair(0x1fdf690): expected_datao=0, payload_size=8 00:21:08.908 [2024-11-26 20:51:12.467876] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.908 [2024-11-26 20:51:12.467886] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:08.908 [2024-11-26 20:51:12.467893] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:08.908 [2024-11-26 20:51:12.508422] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.908 [2024-11-26 20:51:12.508442] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.908 [2024-11-26 20:51:12.508450] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.908 [2024-11-26 20:51:12.508466] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2041700) on tqpair=0x1fdf690 00:21:08.908 ===================================================== 00:21:08.908 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:08.908 ===================================================== 00:21:08.908 Controller Capabilities/Features 00:21:08.908 ================================ 00:21:08.908 Vendor ID: 0000 00:21:08.908 Subsystem Vendor ID: 0000 00:21:08.908 Serial Number: .................... 00:21:08.908 Model Number: ........................................ 
00:21:08.908 Firmware Version: 25.01 00:21:08.908 Recommended Arb Burst: 0 00:21:08.908 IEEE OUI Identifier: 00 00 00 00:21:08.908 Multi-path I/O 00:21:08.908 May have multiple subsystem ports: No 00:21:08.908 May have multiple controllers: No 00:21:08.908 Associated with SR-IOV VF: No 00:21:08.908 Max Data Transfer Size: 131072 00:21:08.908 Max Number of Namespaces: 0 00:21:08.908 Max Number of I/O Queues: 1024 00:21:08.908 NVMe Specification Version (VS): 1.3 00:21:08.908 NVMe Specification Version (Identify): 1.3 00:21:08.908 Maximum Queue Entries: 128 00:21:08.908 Contiguous Queues Required: Yes 00:21:08.908 Arbitration Mechanisms Supported 00:21:08.908 Weighted Round Robin: Not Supported 00:21:08.908 Vendor Specific: Not Supported 00:21:08.908 Reset Timeout: 15000 ms 00:21:08.908 Doorbell Stride: 4 bytes 00:21:08.908 NVM Subsystem Reset: Not Supported 00:21:08.908 Command Sets Supported 00:21:08.908 NVM Command Set: Supported 00:21:08.908 Boot Partition: Not Supported 00:21:08.908 Memory Page Size Minimum: 4096 bytes 00:21:08.908 Memory Page Size Maximum: 4096 bytes 00:21:08.908 Persistent Memory Region: Not Supported 00:21:08.908 Optional Asynchronous Events Supported 00:21:08.908 Namespace Attribute Notices: Not Supported 00:21:08.908 Firmware Activation Notices: Not Supported 00:21:08.908 ANA Change Notices: Not Supported 00:21:08.908 PLE Aggregate Log Change Notices: Not Supported 00:21:08.908 LBA Status Info Alert Notices: Not Supported 00:21:08.908 EGE Aggregate Log Change Notices: Not Supported 00:21:08.908 Normal NVM Subsystem Shutdown event: Not Supported 00:21:08.908 Zone Descriptor Change Notices: Not Supported 00:21:08.908 Discovery Log Change Notices: Supported 00:21:08.908 Controller Attributes 00:21:08.908 128-bit Host Identifier: Not Supported 00:21:08.908 Non-Operational Permissive Mode: Not Supported 00:21:08.908 NVM Sets: Not Supported 00:21:08.908 Read Recovery Levels: Not Supported 00:21:08.908 Endurance Groups: Not Supported 00:21:08.908 Predictable Latency Mode: Not Supported 00:21:08.908 Traffic Based Keep ALive: Not Supported 00:21:08.908 Namespace Granularity: Not Supported 00:21:08.908 SQ Associations: Not Supported 00:21:08.908 UUID List: Not Supported 00:21:08.908 Multi-Domain Subsystem: Not Supported 00:21:08.908 Fixed Capacity Management: Not Supported 00:21:08.908 Variable Capacity Management: Not Supported 00:21:08.908 Delete Endurance Group: Not Supported 00:21:08.908 Delete NVM Set: Not Supported 00:21:08.908 Extended LBA Formats Supported: Not Supported 00:21:08.908 Flexible Data Placement Supported: Not Supported 00:21:08.908 00:21:08.908 Controller Memory Buffer Support 00:21:08.908 ================================ 00:21:08.908 Supported: No 00:21:08.908 00:21:08.908 Persistent Memory Region Support 00:21:08.908 ================================ 00:21:08.908 Supported: No 00:21:08.908 00:21:08.908 Admin Command Set Attributes 00:21:08.908 ============================ 00:21:08.908 Security Send/Receive: Not Supported 00:21:08.908 Format NVM: Not Supported 00:21:08.908 Firmware Activate/Download: Not Supported 00:21:08.908 Namespace Management: Not Supported 00:21:08.908 Device Self-Test: Not Supported 00:21:08.908 Directives: Not Supported 00:21:08.908 NVMe-MI: Not Supported 00:21:08.908 Virtualization Management: Not Supported 00:21:08.908 Doorbell Buffer Config: Not Supported 00:21:08.908 Get LBA Status Capability: Not Supported 00:21:08.908 Command & Feature Lockdown Capability: Not Supported 00:21:08.908 Abort Command Limit: 1 00:21:08.908 Async 
Event Request Limit: 4 00:21:08.908 Number of Firmware Slots: N/A 00:21:08.908 Firmware Slot 1 Read-Only: N/A 00:21:08.908 Firmware Activation Without Reset: N/A 00:21:08.909 Multiple Update Detection Support: N/A 00:21:08.909 Firmware Update Granularity: No Information Provided 00:21:08.909 Per-Namespace SMART Log: No 00:21:08.909 Asymmetric Namespace Access Log Page: Not Supported 00:21:08.909 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:08.909 Command Effects Log Page: Not Supported 00:21:08.909 Get Log Page Extended Data: Supported 00:21:08.909 Telemetry Log Pages: Not Supported 00:21:08.909 Persistent Event Log Pages: Not Supported 00:21:08.909 Supported Log Pages Log Page: May Support 00:21:08.909 Commands Supported & Effects Log Page: Not Supported 00:21:08.909 Feature Identifiers & Effects Log Page:May Support 00:21:08.909 NVMe-MI Commands & Effects Log Page: May Support 00:21:08.909 Data Area 4 for Telemetry Log: Not Supported 00:21:08.909 Error Log Page Entries Supported: 128 00:21:08.909 Keep Alive: Not Supported 00:21:08.909 00:21:08.909 NVM Command Set Attributes 00:21:08.909 ========================== 00:21:08.909 Submission Queue Entry Size 00:21:08.909 Max: 1 00:21:08.909 Min: 1 00:21:08.909 Completion Queue Entry Size 00:21:08.909 Max: 1 00:21:08.909 Min: 1 00:21:08.909 Number of Namespaces: 0 00:21:08.909 Compare Command: Not Supported 00:21:08.909 Write Uncorrectable Command: Not Supported 00:21:08.909 Dataset Management Command: Not Supported 00:21:08.909 Write Zeroes Command: Not Supported 00:21:08.909 Set Features Save Field: Not Supported 00:21:08.909 Reservations: Not Supported 00:21:08.909 Timestamp: Not Supported 00:21:08.909 Copy: Not Supported 00:21:08.909 Volatile Write Cache: Not Present 00:21:08.909 Atomic Write Unit (Normal): 1 00:21:08.909 Atomic Write Unit (PFail): 1 00:21:08.909 Atomic Compare & Write Unit: 1 00:21:08.909 Fused Compare & Write: Supported 00:21:08.909 Scatter-Gather List 00:21:08.909 SGL Command Set: Supported 00:21:08.909 SGL Keyed: Supported 00:21:08.909 SGL Bit Bucket Descriptor: Not Supported 00:21:08.909 SGL Metadata Pointer: Not Supported 00:21:08.909 Oversized SGL: Not Supported 00:21:08.909 SGL Metadata Address: Not Supported 00:21:08.909 SGL Offset: Supported 00:21:08.909 Transport SGL Data Block: Not Supported 00:21:08.909 Replay Protected Memory Block: Not Supported 00:21:08.909 00:21:08.909 Firmware Slot Information 00:21:08.909 ========================= 00:21:08.909 Active slot: 0 00:21:08.909 00:21:08.909 00:21:08.909 Error Log 00:21:08.909 ========= 00:21:08.909 00:21:08.909 Active Namespaces 00:21:08.909 ================= 00:21:08.909 Discovery Log Page 00:21:08.909 ================== 00:21:08.909 Generation Counter: 2 00:21:08.909 Number of Records: 2 00:21:08.909 Record Format: 0 00:21:08.909 00:21:08.909 Discovery Log Entry 0 00:21:08.909 ---------------------- 00:21:08.909 Transport Type: 3 (TCP) 00:21:08.909 Address Family: 1 (IPv4) 00:21:08.909 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:08.909 Entry Flags: 00:21:08.909 Duplicate Returned Information: 1 00:21:08.909 Explicit Persistent Connection Support for Discovery: 1 00:21:08.909 Transport Requirements: 00:21:08.909 Secure Channel: Not Required 00:21:08.909 Port ID: 0 (0x0000) 00:21:08.909 Controller ID: 65535 (0xffff) 00:21:08.909 Admin Max SQ Size: 128 00:21:08.909 Transport Service Identifier: 4420 00:21:08.909 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:08.909 Transport Address: 10.0.0.2 00:21:08.909 
Discovery Log Entry 1 00:21:08.909 ---------------------- 00:21:08.909 Transport Type: 3 (TCP) 00:21:08.909 Address Family: 1 (IPv4) 00:21:08.909 Subsystem Type: 2 (NVM Subsystem) 00:21:08.909 Entry Flags: 00:21:08.909 Duplicate Returned Information: 0 00:21:08.909 Explicit Persistent Connection Support for Discovery: 0 00:21:08.909 Transport Requirements: 00:21:08.909 Secure Channel: Not Required 00:21:08.909 Port ID: 0 (0x0000) 00:21:08.909 Controller ID: 65535 (0xffff) 00:21:08.909 Admin Max SQ Size: 128 00:21:08.909 Transport Service Identifier: 4420 00:21:08.909 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:08.909 Transport Address: 10.0.0.2 [2024-11-26 20:51:12.508584] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:21:08.909 [2024-11-26 20:51:12.508608] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2041100) on tqpair=0x1fdf690 00:21:08.909 [2024-11-26 20:51:12.508623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.909 [2024-11-26 20:51:12.508632] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2041280) on tqpair=0x1fdf690 00:21:08.909 [2024-11-26 20:51:12.508640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.909 [2024-11-26 20:51:12.508647] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2041400) on tqpair=0x1fdf690 00:21:08.909 [2024-11-26 20:51:12.508655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.909 [2024-11-26 20:51:12.508663] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2041580) on tqpair=0x1fdf690 00:21:08.909 [2024-11-26 20:51:12.508670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:08.909 [2024-11-26 20:51:12.508699] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.909 [2024-11-26 20:51:12.508706] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.909 [2024-11-26 20:51:12.508713] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fdf690) 00:21:08.909 [2024-11-26 20:51:12.508723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.909 [2024-11-26 20:51:12.508762] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2041580, cid 3, qid 0 00:21:08.909 [2024-11-26 20:51:12.508847] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.909 [2024-11-26 20:51:12.508862] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.909 [2024-11-26 20:51:12.508883] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.909 [2024-11-26 20:51:12.508893] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2041580) on tqpair=0x1fdf690 00:21:08.909 [2024-11-26 20:51:12.508905] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.909 [2024-11-26 20:51:12.508913] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.909 [2024-11-26 20:51:12.508920] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fdf690) 00:21:08.909 [2024-11-26 
20:51:12.508930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.909 [2024-11-26 20:51:12.508959] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2041580, cid 3, qid 0 00:21:08.909 [2024-11-26 20:51:12.509112] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.909 [2024-11-26 20:51:12.509127] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.909 [2024-11-26 20:51:12.509134] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.909 [2024-11-26 20:51:12.509141] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2041580) on tqpair=0x1fdf690 00:21:08.909 [2024-11-26 20:51:12.509150] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:21:08.909 [2024-11-26 20:51:12.509158] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:21:08.909 [2024-11-26 20:51:12.509175] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.909 [2024-11-26 20:51:12.509186] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.909 [2024-11-26 20:51:12.509193] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fdf690) 00:21:08.909 [2024-11-26 20:51:12.509203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.909 [2024-11-26 20:51:12.509246] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2041580, cid 3, qid 0 00:21:08.909 [2024-11-26 20:51:12.509379] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.909 [2024-11-26 20:51:12.509395] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.909 [2024-11-26 20:51:12.509402] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.909 [2024-11-26 20:51:12.509409] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2041580) on tqpair=0x1fdf690 00:21:08.909 [2024-11-26 20:51:12.509429] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.909 [2024-11-26 20:51:12.509440] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.909 [2024-11-26 20:51:12.509446] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fdf690) 00:21:08.909 [2024-11-26 20:51:12.509457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.909 [2024-11-26 20:51:12.509481] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2041580, cid 3, qid 0 00:21:08.909 [2024-11-26 20:51:12.509578] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.909 [2024-11-26 20:51:12.509593] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.910 [2024-11-26 20:51:12.509600] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.910 [2024-11-26 20:51:12.509607] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2041580) on tqpair=0x1fdf690 00:21:08.910 [2024-11-26 20:51:12.509625] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.910 [2024-11-26 20:51:12.509636] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.910 [2024-11-26 20:51:12.509642] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fdf690) 00:21:08.910 [2024-11-26 20:51:12.509653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.910 [2024-11-26 20:51:12.509676] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2041580, cid 3, qid 0 00:21:08.910 [2024-11-26 20:51:12.509752] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.910 [2024-11-26 20:51:12.509767] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.910 [2024-11-26 20:51:12.509777] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.910 [2024-11-26 20:51:12.509784] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2041580) on tqpair=0x1fdf690 00:21:08.910 [2024-11-26 20:51:12.509801] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.910 [2024-11-26 20:51:12.509810] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.910 [2024-11-26 20:51:12.509818] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fdf690) 00:21:08.910 [2024-11-26 20:51:12.509834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.910 [2024-11-26 20:51:12.509859] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2041580, cid 3, qid 0 00:21:08.910 [2024-11-26 20:51:12.509940] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.910 [2024-11-26 20:51:12.509956] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.910 [2024-11-26 20:51:12.509963] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.910 [2024-11-26 20:51:12.509970] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2041580) on tqpair=0x1fdf690 00:21:08.910 [2024-11-26 20:51:12.509988] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.910 [2024-11-26 20:51:12.509999] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.910 [2024-11-26 20:51:12.510005] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fdf690) 00:21:08.910 [2024-11-26 20:51:12.510016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.910 [2024-11-26 20:51:12.510043] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2041580, cid 3, qid 0 00:21:08.910 [2024-11-26 20:51:12.510118] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.910 [2024-11-26 20:51:12.510134] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.910 [2024-11-26 20:51:12.510141] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.910 [2024-11-26 20:51:12.510148] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2041580) on tqpair=0x1fdf690 00:21:08.910 [2024-11-26 20:51:12.510166] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.910 [2024-11-26 20:51:12.510177] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.910 [2024-11-26 20:51:12.510183] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fdf690) 00:21:08.910 [2024-11-26 20:51:12.510194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.910 [2024-11-26 20:51:12.510217] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2041580, cid 3, qid 0 00:21:08.910 [2024-11-26 20:51:12.510294] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.910 [2024-11-26 20:51:12.514334] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.910 [2024-11-26 20:51:12.514345] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.910 [2024-11-26 20:51:12.514352] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2041580) on tqpair=0x1fdf690 00:21:08.910 [2024-11-26 20:51:12.514372] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:08.910 [2024-11-26 20:51:12.514383] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:08.910 [2024-11-26 20:51:12.514389] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fdf690) 00:21:08.910 [2024-11-26 20:51:12.514400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:08.910 [2024-11-26 20:51:12.514422] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2041580, cid 3, qid 0 00:21:08.910 [2024-11-26 20:51:12.514524] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:08.910 [2024-11-26 20:51:12.514541] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:08.910 [2024-11-26 20:51:12.514548] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:08.910 [2024-11-26 20:51:12.514555] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2041580) on tqpair=0x1fdf690 00:21:08.910 [2024-11-26 20:51:12.514569] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:21:08.910 00:21:08.910 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:08.910 [2024-11-26 20:51:12.550611] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:21:08.910 [2024-11-26 20:51:12.550654] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1720561 ] 00:21:09.236 [2024-11-26 20:51:12.598403] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:21:09.236 [2024-11-26 20:51:12.598461] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:09.236 [2024-11-26 20:51:12.598472] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:09.236 [2024-11-26 20:51:12.598492] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:09.236 [2024-11-26 20:51:12.598509] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:09.236 [2024-11-26 20:51:12.602580] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:21:09.236 [2024-11-26 20:51:12.602641] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x98d690 0 00:21:09.236 [2024-11-26 20:51:12.609315] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:09.236 [2024-11-26 20:51:12.609336] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:09.236 [2024-11-26 20:51:12.609343] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:09.236 [2024-11-26 20:51:12.609349] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:09.236 [2024-11-26 20:51:12.609400] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.236 [2024-11-26 20:51:12.609413] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.236 [2024-11-26 20:51:12.609420] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x98d690) 00:21:09.236 [2024-11-26 20:51:12.609435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:09.236 [2024-11-26 20:51:12.609463] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef100, cid 0, qid 0 00:21:09.236 [2024-11-26 20:51:12.616337] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.237 [2024-11-26 20:51:12.616356] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.237 [2024-11-26 20:51:12.616364] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.237 [2024-11-26 20:51:12.616371] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef100) on tqpair=0x98d690 00:21:09.237 [2024-11-26 20:51:12.616386] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:09.237 [2024-11-26 20:51:12.616398] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:21:09.237 [2024-11-26 20:51:12.616408] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:21:09.237 [2024-11-26 20:51:12.616429] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.237 [2024-11-26 20:51:12.616438] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.237 [2024-11-26 20:51:12.616445] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x98d690) 00:21:09.237 [2024-11-26 20:51:12.616456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.237 [2024-11-26 20:51:12.616481] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef100, cid 0, qid 0 00:21:09.237 [2024-11-26 20:51:12.616596] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.237 [2024-11-26 20:51:12.616609] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.237 [2024-11-26 20:51:12.616616] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.237 [2024-11-26 20:51:12.616622] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef100) on tqpair=0x98d690 00:21:09.237 [2024-11-26 20:51:12.616635] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:21:09.237 [2024-11-26 20:51:12.616651] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:21:09.237 [2024-11-26 20:51:12.616664] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.237 [2024-11-26 20:51:12.616672] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.237 [2024-11-26 20:51:12.616678] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x98d690) 00:21:09.237 [2024-11-26 20:51:12.616689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.237 [2024-11-26 20:51:12.616711] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef100, cid 0, qid 0 00:21:09.237 [2024-11-26 20:51:12.616782] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.237 [2024-11-26 20:51:12.616795] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.237 [2024-11-26 20:51:12.616802] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.237 [2024-11-26 20:51:12.616809] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef100) on tqpair=0x98d690 00:21:09.237 [2024-11-26 20:51:12.616818] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:21:09.237 [2024-11-26 20:51:12.616832] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:09.237 [2024-11-26 20:51:12.616845] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.237 [2024-11-26 20:51:12.616853] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.237 [2024-11-26 20:51:12.616859] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x98d690) 00:21:09.237 [2024-11-26 20:51:12.616869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.237 [2024-11-26 20:51:12.616891] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef100, cid 0, qid 0 00:21:09.237 [2024-11-26 20:51:12.616964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.237 [2024-11-26 20:51:12.616976] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.237 [2024-11-26 20:51:12.616983] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.237 [2024-11-26 20:51:12.616990] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef100) on tqpair=0x98d690 00:21:09.237 [2024-11-26 20:51:12.616999] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:09.237 [2024-11-26 20:51:12.617016] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.237 [2024-11-26 20:51:12.617025] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.237 [2024-11-26 20:51:12.617032] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x98d690) 00:21:09.237 [2024-11-26 20:51:12.617042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.237 [2024-11-26 20:51:12.617063] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef100, cid 0, qid 0 00:21:09.237 [2024-11-26 20:51:12.617135] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.237 [2024-11-26 20:51:12.617148] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.237 [2024-11-26 20:51:12.617154] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.237 [2024-11-26 20:51:12.617161] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef100) on tqpair=0x98d690 00:21:09.237 [2024-11-26 20:51:12.617169] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:09.237 [2024-11-26 20:51:12.617177] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:09.237 [2024-11-26 20:51:12.617190] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:09.237 [2024-11-26 20:51:12.617300] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:21:09.237 [2024-11-26 20:51:12.617320] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:09.237 [2024-11-26 20:51:12.617332] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.237 [2024-11-26 20:51:12.617340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.237 [2024-11-26 20:51:12.617347] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x98d690) 00:21:09.237 [2024-11-26 20:51:12.617357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.237 [2024-11-26 20:51:12.617384] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef100, cid 0, qid 0 00:21:09.237 [2024-11-26 20:51:12.617492] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.237 [2024-11-26 20:51:12.617504] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.237 [2024-11-26 20:51:12.617512] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.237 [2024-11-26 20:51:12.617518] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef100) on tqpair=0x98d690 00:21:09.237 [2024-11-26 
20:51:12.617527] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:09.237 [2024-11-26 20:51:12.617544] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.237 [2024-11-26 20:51:12.617553] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.237 [2024-11-26 20:51:12.617559] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x98d690) 00:21:09.237 [2024-11-26 20:51:12.617570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.237 [2024-11-26 20:51:12.617591] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef100, cid 0, qid 0 00:21:09.237 [2024-11-26 20:51:12.617687] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.237 [2024-11-26 20:51:12.617701] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.237 [2024-11-26 20:51:12.617708] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.237 [2024-11-26 20:51:12.617715] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef100) on tqpair=0x98d690 00:21:09.237 [2024-11-26 20:51:12.617723] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:09.237 [2024-11-26 20:51:12.617731] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:09.237 [2024-11-26 20:51:12.617745] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:21:09.237 [2024-11-26 20:51:12.617764] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:09.237 [2024-11-26 20:51:12.617779] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.237 [2024-11-26 20:51:12.617787] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x98d690) 00:21:09.237 [2024-11-26 20:51:12.617798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.237 [2024-11-26 20:51:12.617820] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef100, cid 0, qid 0 00:21:09.237 [2024-11-26 20:51:12.617946] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:09.237 [2024-11-26 20:51:12.617961] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:09.237 [2024-11-26 20:51:12.617968] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:09.237 [2024-11-26 20:51:12.617974] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x98d690): datao=0, datal=4096, cccid=0 00:21:09.237 [2024-11-26 20:51:12.617982] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9ef100) on tqpair(0x98d690): expected_datao=0, payload_size=4096 00:21:09.237 [2024-11-26 20:51:12.617989] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.237 [2024-11-26 20:51:12.618006] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:09.237 [2024-11-26 20:51:12.618015] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
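
The trace to this point follows the admin-queue bring-up against nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420: FABRIC CONNECT, the VS/CAP/CC property accesses, CC.EN = 1, the CSTS.RDY = 1 wait, and the first IDENTIFY (CNS 01h) whose 4096-byte c2h data PDU arrives above. On the host side this whole sequence is normally driven by a single spdk_nvme_connect() call; a minimal sketch against the same target follows. Only the transport type, address, port and subsystem NQN are taken from this log; the program name, printed fields and error handling are illustrative assumptions (env bootstrap details can also differ slightly between SPDK releases), not part of the test run.

    /* Minimal sketch (not part of the test): connect to the subsystem traced
     * above and print a few Identify Controller fields. */
    #include <stdio.h>
    #include <string.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
            struct spdk_env_opts env_opts;
            struct spdk_nvme_transport_id trid;
            struct spdk_nvme_ctrlr *ctrlr;
            const struct spdk_nvme_ctrlr_data *cdata;

            spdk_env_opts_init(&env_opts);
            env_opts.name = "identify_sketch";      /* illustrative program name */
            if (spdk_env_init(&env_opts) < 0) {
                    return 1;
            }

            /* TCP target as seen in the trace: 10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1 */
            memset(&trid, 0, sizeof(trid));
            spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
            trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
            snprintf(trid.traddr, sizeof(trid.traddr), "10.0.0.2");
            snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4420");
            snprintf(trid.subnqn, sizeof(trid.subnqn), "nqn.2016-06.io.spdk:cnode1");

            /* spdk_nvme_connect() drives the sequence traced in this log:
             * FABRIC CONNECT, property get/set, CC.EN = 1, CSTS.RDY wait,
             * then IDENTIFY CONTROLLER (CNS 01h). */
            ctrlr = spdk_nvme_connect(&trid, NULL, 0);
            if (ctrlr == NULL) {
                    fprintf(stderr, "connect to %s failed\n", trid.traddr);
                    spdk_env_fini();
                    return 1;
            }

            cdata = spdk_nvme_ctrlr_get_data(ctrlr);
            printf("MN: %.40s  SN: %.20s  FR: %.8s\n",
                   (const char *)cdata->mn, (const char *)cdata->sn,
                   (const char *)cdata->fr);

            spdk_nvme_detach(ctrlr);
            spdk_env_fini();
            return 0;
    }
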
00:21:09.237 [2024-11-26 20:51:12.658446] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.237 [2024-11-26 20:51:12.658465] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.237 [2024-11-26 20:51:12.658473] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.237 [2024-11-26 20:51:12.658487] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef100) on tqpair=0x98d690 00:21:09.237 [2024-11-26 20:51:12.658500] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:21:09.237 [2024-11-26 20:51:12.658509] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:21:09.237 [2024-11-26 20:51:12.658517] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:21:09.237 [2024-11-26 20:51:12.658524] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:21:09.237 [2024-11-26 20:51:12.658532] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:21:09.237 [2024-11-26 20:51:12.658540] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:21:09.237 [2024-11-26 20:51:12.658555] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:09.237 [2024-11-26 20:51:12.658568] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.237 [2024-11-26 20:51:12.658576] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.237 [2024-11-26 20:51:12.658582] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x98d690) 00:21:09.237 [2024-11-26 20:51:12.658594] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:09.237 [2024-11-26 20:51:12.658618] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef100, cid 0, qid 0 00:21:09.237 [2024-11-26 20:51:12.658696] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.237 [2024-11-26 20:51:12.658711] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.237 [2024-11-26 20:51:12.658718] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.237 [2024-11-26 20:51:12.658725] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef100) on tqpair=0x98d690 00:21:09.237 [2024-11-26 20:51:12.658735] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.237 [2024-11-26 20:51:12.658744] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.237 [2024-11-26 20:51:12.658750] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x98d690) 00:21:09.237 [2024-11-26 20:51:12.658760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:09.237 [2024-11-26 20:51:12.658771] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.237 [2024-11-26 20:51:12.658778] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.237 [2024-11-26 20:51:12.658784] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=1 on tqpair(0x98d690) 00:21:09.237 [2024-11-26 20:51:12.658793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:09.238 [2024-11-26 20:51:12.658803] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.238 [2024-11-26 20:51:12.658810] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.238 [2024-11-26 20:51:12.658816] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x98d690) 00:21:09.238 [2024-11-26 20:51:12.658825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:09.238 [2024-11-26 20:51:12.658835] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.238 [2024-11-26 20:51:12.658841] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.238 [2024-11-26 20:51:12.658848] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98d690) 00:21:09.238 [2024-11-26 20:51:12.658856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:09.238 [2024-11-26 20:51:12.658869] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:09.238 [2024-11-26 20:51:12.658890] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:09.238 [2024-11-26 20:51:12.658903] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.238 [2024-11-26 20:51:12.658911] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x98d690) 00:21:09.238 [2024-11-26 20:51:12.658921] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.238 [2024-11-26 20:51:12.658944] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef100, cid 0, qid 0 00:21:09.238 [2024-11-26 20:51:12.658956] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef280, cid 1, qid 0 00:21:09.238 [2024-11-26 20:51:12.658965] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef400, cid 2, qid 0 00:21:09.238 [2024-11-26 20:51:12.658973] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef580, cid 3, qid 0 00:21:09.238 [2024-11-26 20:51:12.658981] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef700, cid 4, qid 0 00:21:09.238 [2024-11-26 20:51:12.659157] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.238 [2024-11-26 20:51:12.659171] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.238 [2024-11-26 20:51:12.659178] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.238 [2024-11-26 20:51:12.659185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef700) on tqpair=0x98d690 00:21:09.238 [2024-11-26 20:51:12.659193] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:21:09.238 [2024-11-26 20:51:12.659202] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 
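
With the AER, keep-alive and number-of-queues features configured, the next admin commands in the trace (IDENTIFY CNS 02h for the active namespace list, then CNS 00h and CNS 03h per namespace) populate the namespace handles that the host later iterates. A hedged sketch of that iteration, assuming a ctrlr handle obtained as in the previous sketch; the function name is illustrative only:

    /* Sketch only: walk the active namespace list of an already-connected
     * controller (ctrlr obtained as in the previous sketch). */
    #include <inttypes.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    static void print_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
    {
            uint32_t nsid;

            for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
                 nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
                    struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
                    const struct spdk_nvme_ns_data *nsdata = spdk_nvme_ns_get_data(ns);

                    printf("Namespace %" PRIu32 ": %" PRIu64 " blocks, %" PRIu32 "-byte sectors\n",
                           nsid, nsdata->nsze, spdk_nvme_ns_get_sector_size(ns));
            }
    }
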
00:21:09.238 [2024-11-26 20:51:12.659221] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:21:09.238 [2024-11-26 20:51:12.659234] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:09.238 [2024-11-26 20:51:12.659245] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.238 [2024-11-26 20:51:12.659252] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.238 [2024-11-26 20:51:12.659259] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x98d690) 00:21:09.238 [2024-11-26 20:51:12.659269] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:09.238 [2024-11-26 20:51:12.659291] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef700, cid 4, qid 0 00:21:09.238 [2024-11-26 20:51:12.659437] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.238 [2024-11-26 20:51:12.659452] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.238 [2024-11-26 20:51:12.659459] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.238 [2024-11-26 20:51:12.659466] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef700) on tqpair=0x98d690 00:21:09.238 [2024-11-26 20:51:12.659538] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:21:09.238 [2024-11-26 20:51:12.659559] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:09.238 [2024-11-26 20:51:12.659574] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.238 [2024-11-26 20:51:12.659582] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x98d690) 00:21:09.238 [2024-11-26 20:51:12.659593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.238 [2024-11-26 20:51:12.659620] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef700, cid 4, qid 0 00:21:09.238 [2024-11-26 20:51:12.659748] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:09.238 [2024-11-26 20:51:12.659763] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:09.238 [2024-11-26 20:51:12.659770] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:09.238 [2024-11-26 20:51:12.659777] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x98d690): datao=0, datal=4096, cccid=4 00:21:09.238 [2024-11-26 20:51:12.659784] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9ef700) on tqpair(0x98d690): expected_datao=0, payload_size=4096 00:21:09.238 [2024-11-26 20:51:12.659792] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.238 [2024-11-26 20:51:12.659809] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:09.238 [2024-11-26 20:51:12.659818] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:09.238 [2024-11-26 20:51:12.704314] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.238 [2024-11-26 20:51:12.704334] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.238 [2024-11-26 20:51:12.704341] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.238 [2024-11-26 20:51:12.704348] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef700) on tqpair=0x98d690 00:21:09.238 [2024-11-26 20:51:12.704378] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:21:09.238 [2024-11-26 20:51:12.704399] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:21:09.238 [2024-11-26 20:51:12.704432] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:21:09.238 [2024-11-26 20:51:12.704448] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.238 [2024-11-26 20:51:12.704456] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x98d690) 00:21:09.238 [2024-11-26 20:51:12.704468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.238 [2024-11-26 20:51:12.704492] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef700, cid 4, qid 0 00:21:09.238 [2024-11-26 20:51:12.704638] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:09.238 [2024-11-26 20:51:12.704653] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:09.238 [2024-11-26 20:51:12.704660] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:09.238 [2024-11-26 20:51:12.704666] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x98d690): datao=0, datal=4096, cccid=4 00:21:09.238 [2024-11-26 20:51:12.704674] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9ef700) on tqpair(0x98d690): expected_datao=0, payload_size=4096 00:21:09.238 [2024-11-26 20:51:12.704681] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.238 [2024-11-26 20:51:12.704699] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:09.238 [2024-11-26 20:51:12.704708] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:09.238 [2024-11-26 20:51:12.747315] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.238 [2024-11-26 20:51:12.747335] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.238 [2024-11-26 20:51:12.747343] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.238 [2024-11-26 20:51:12.747350] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef700) on tqpair=0x98d690 00:21:09.238 [2024-11-26 20:51:12.747367] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:09.238 [2024-11-26 20:51:12.747387] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:09.238 [2024-11-26 20:51:12.747407] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.238 [2024-11-26 20:51:12.747417] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x98d690) 00:21:09.238 [2024-11-26 20:51:12.747428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.238 [2024-11-26 20:51:12.747453] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef700, cid 4, qid 0 00:21:09.238 [2024-11-26 20:51:12.747579] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:09.238 [2024-11-26 20:51:12.747593] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:09.238 [2024-11-26 20:51:12.747600] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:09.238 [2024-11-26 20:51:12.747606] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x98d690): datao=0, datal=4096, cccid=4 00:21:09.238 [2024-11-26 20:51:12.747614] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9ef700) on tqpair(0x98d690): expected_datao=0, payload_size=4096 00:21:09.238 [2024-11-26 20:51:12.747621] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.238 [2024-11-26 20:51:12.747638] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:09.238 [2024-11-26 20:51:12.747647] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:09.238 [2024-11-26 20:51:12.788377] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.238 [2024-11-26 20:51:12.788396] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.238 [2024-11-26 20:51:12.788404] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.238 [2024-11-26 20:51:12.788411] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef700) on tqpair=0x98d690 00:21:09.238 [2024-11-26 20:51:12.788430] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:09.238 [2024-11-26 20:51:12.788448] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:21:09.238 [2024-11-26 20:51:12.788463] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:21:09.238 [2024-11-26 20:51:12.788474] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:21:09.238 [2024-11-26 20:51:12.788483] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:09.238 [2024-11-26 20:51:12.788492] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:21:09.238 [2024-11-26 20:51:12.788500] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:21:09.238 [2024-11-26 20:51:12.788508] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:21:09.238 [2024-11-26 20:51:12.788516] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:21:09.238 [2024-11-26 20:51:12.788535] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.238 [2024-11-26 20:51:12.788544] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x98d690) 00:21:09.238 
[2024-11-26 20:51:12.788556] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.238 [2024-11-26 20:51:12.788567] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.238 [2024-11-26 20:51:12.788575] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.238 [2024-11-26 20:51:12.788581] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x98d690) 00:21:09.238 [2024-11-26 20:51:12.788590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:09.238 [2024-11-26 20:51:12.788621] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef700, cid 4, qid 0 00:21:09.238 [2024-11-26 20:51:12.788634] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef880, cid 5, qid 0 00:21:09.238 [2024-11-26 20:51:12.788716] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.238 [2024-11-26 20:51:12.788729] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.238 [2024-11-26 20:51:12.788736] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.238 [2024-11-26 20:51:12.788743] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef700) on tqpair=0x98d690 00:21:09.238 [2024-11-26 20:51:12.788753] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.238 [2024-11-26 20:51:12.788763] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.238 [2024-11-26 20:51:12.788770] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.238 [2024-11-26 20:51:12.788776] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef880) on tqpair=0x98d690 00:21:09.238 [2024-11-26 20:51:12.788792] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.238 [2024-11-26 20:51:12.788801] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x98d690) 00:21:09.238 [2024-11-26 20:51:12.788812] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.239 [2024-11-26 20:51:12.788833] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef880, cid 5, qid 0 00:21:09.239 [2024-11-26 20:51:12.788914] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.239 [2024-11-26 20:51:12.788929] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.239 [2024-11-26 20:51:12.788936] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.239 [2024-11-26 20:51:12.788943] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef880) on tqpair=0x98d690 00:21:09.239 [2024-11-26 20:51:12.788959] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.239 [2024-11-26 20:51:12.788968] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x98d690) 00:21:09.239 [2024-11-26 20:51:12.788979] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.239 [2024-11-26 20:51:12.789000] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef880, cid 5, qid 0 00:21:09.239 [2024-11-26 20:51:12.789081] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:21:09.239 [2024-11-26 20:51:12.789093] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.239 [2024-11-26 20:51:12.789100] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.239 [2024-11-26 20:51:12.789107] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef880) on tqpair=0x98d690 00:21:09.239 [2024-11-26 20:51:12.789122] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.239 [2024-11-26 20:51:12.789131] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x98d690) 00:21:09.239 [2024-11-26 20:51:12.789142] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.239 [2024-11-26 20:51:12.789162] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef880, cid 5, qid 0 00:21:09.239 [2024-11-26 20:51:12.789234] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.239 [2024-11-26 20:51:12.789246] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.239 [2024-11-26 20:51:12.789253] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.239 [2024-11-26 20:51:12.789260] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef880) on tqpair=0x98d690 00:21:09.239 [2024-11-26 20:51:12.789284] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.239 [2024-11-26 20:51:12.789295] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x98d690) 00:21:09.239 [2024-11-26 20:51:12.793320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.239 [2024-11-26 20:51:12.793341] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.239 [2024-11-26 20:51:12.793364] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x98d690) 00:21:09.239 [2024-11-26 20:51:12.793374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.239 [2024-11-26 20:51:12.793387] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.239 [2024-11-26 20:51:12.793395] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x98d690) 00:21:09.239 [2024-11-26 20:51:12.793404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.239 [2024-11-26 20:51:12.793416] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.239 [2024-11-26 20:51:12.793424] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x98d690) 00:21:09.239 [2024-11-26 20:51:12.793434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.239 [2024-11-26 20:51:12.793457] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef880, cid 5, qid 0 00:21:09.239 [2024-11-26 20:51:12.793469] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef700, cid 4, qid 0 00:21:09.239 [2024-11-26 20:51:12.793478] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efa00, cid 6, qid 0 00:21:09.239 [2024-11-26 20:51:12.793486] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efb80, cid 7, qid 0 00:21:09.239 [2024-11-26 20:51:12.793658] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:09.239 [2024-11-26 20:51:12.793674] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:09.239 [2024-11-26 20:51:12.793681] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:09.239 [2024-11-26 20:51:12.793687] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x98d690): datao=0, datal=8192, cccid=5 00:21:09.239 [2024-11-26 20:51:12.793695] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9ef880) on tqpair(0x98d690): expected_datao=0, payload_size=8192 00:21:09.239 [2024-11-26 20:51:12.793702] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.239 [2024-11-26 20:51:12.793721] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:09.239 [2024-11-26 20:51:12.793730] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:09.239 [2024-11-26 20:51:12.793743] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:09.239 [2024-11-26 20:51:12.793753] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:09.239 [2024-11-26 20:51:12.793759] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:09.239 [2024-11-26 20:51:12.793765] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x98d690): datao=0, datal=512, cccid=4 00:21:09.239 [2024-11-26 20:51:12.793773] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9ef700) on tqpair(0x98d690): expected_datao=0, payload_size=512 00:21:09.239 [2024-11-26 20:51:12.793780] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.239 [2024-11-26 20:51:12.793789] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:09.239 [2024-11-26 20:51:12.793796] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:09.239 [2024-11-26 20:51:12.793805] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:09.239 [2024-11-26 20:51:12.793814] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:09.239 [2024-11-26 20:51:12.793820] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:09.239 [2024-11-26 20:51:12.793826] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x98d690): datao=0, datal=512, cccid=6 00:21:09.239 [2024-11-26 20:51:12.793841] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9efa00) on tqpair(0x98d690): expected_datao=0, payload_size=512 00:21:09.239 [2024-11-26 20:51:12.793849] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.239 [2024-11-26 20:51:12.793858] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:09.239 [2024-11-26 20:51:12.793865] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:09.239 [2024-11-26 20:51:12.793874] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:09.239 [2024-11-26 20:51:12.793883] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:09.239 [2024-11-26 20:51:12.793889] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:09.239 [2024-11-26 20:51:12.793895] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x98d690): datao=0, datal=4096, cccid=7 00:21:09.239 [2024-11-26 20:51:12.793903] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9efb80) on tqpair(0x98d690): expected_datao=0, payload_size=4096 00:21:09.239 [2024-11-26 20:51:12.793910] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.239 [2024-11-26 20:51:12.793920] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:09.239 [2024-11-26 20:51:12.793927] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:09.239 [2024-11-26 20:51:12.793938] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.239 [2024-11-26 20:51:12.793948] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.239 [2024-11-26 20:51:12.793954] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.239 [2024-11-26 20:51:12.793961] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef880) on tqpair=0x98d690 00:21:09.239 [2024-11-26 20:51:12.793980] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.239 [2024-11-26 20:51:12.793992] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.239 [2024-11-26 20:51:12.793998] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.239 [2024-11-26 20:51:12.794005] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef700) on tqpair=0x98d690 00:21:09.239 [2024-11-26 20:51:12.794035] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.239 [2024-11-26 20:51:12.794046] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.239 [2024-11-26 20:51:12.794053] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.239 [2024-11-26 20:51:12.794059] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efa00) on tqpair=0x98d690 00:21:09.239 [2024-11-26 20:51:12.794070] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.239 [2024-11-26 20:51:12.794080] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.239 [2024-11-26 20:51:12.794101] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.239 [2024-11-26 20:51:12.794107] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efb80) on tqpair=0x98d690 00:21:09.239 ===================================================== 00:21:09.239 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:09.239 ===================================================== 00:21:09.239 Controller Capabilities/Features 00:21:09.239 ================================ 00:21:09.239 Vendor ID: 8086 00:21:09.239 Subsystem Vendor ID: 8086 00:21:09.239 Serial Number: SPDK00000000000001 00:21:09.239 Model Number: SPDK bdev Controller 00:21:09.239 Firmware Version: 25.01 00:21:09.239 Recommended Arb Burst: 6 00:21:09.239 IEEE OUI Identifier: e4 d2 5c 00:21:09.239 Multi-path I/O 00:21:09.239 May have multiple subsystem ports: Yes 00:21:09.239 May have multiple controllers: Yes 00:21:09.239 Associated with SR-IOV VF: No 00:21:09.239 Max Data Transfer Size: 131072 00:21:09.239 Max Number of Namespaces: 32 00:21:09.239 Max Number of I/O Queues: 127 00:21:09.239 NVMe Specification Version (VS): 1.3 00:21:09.239 NVMe Specification Version (Identify): 1.3 00:21:09.239 Maximum Queue Entries: 128 00:21:09.239 Contiguous Queues Required: Yes 00:21:09.239 Arbitration Mechanisms Supported 00:21:09.239 Weighted Round Robin: Not Supported 
00:21:09.239 Vendor Specific: Not Supported 00:21:09.239 Reset Timeout: 15000 ms 00:21:09.239 Doorbell Stride: 4 bytes 00:21:09.239 NVM Subsystem Reset: Not Supported 00:21:09.239 Command Sets Supported 00:21:09.239 NVM Command Set: Supported 00:21:09.239 Boot Partition: Not Supported 00:21:09.239 Memory Page Size Minimum: 4096 bytes 00:21:09.239 Memory Page Size Maximum: 4096 bytes 00:21:09.239 Persistent Memory Region: Not Supported 00:21:09.239 Optional Asynchronous Events Supported 00:21:09.239 Namespace Attribute Notices: Supported 00:21:09.239 Firmware Activation Notices: Not Supported 00:21:09.239 ANA Change Notices: Not Supported 00:21:09.239 PLE Aggregate Log Change Notices: Not Supported 00:21:09.239 LBA Status Info Alert Notices: Not Supported 00:21:09.239 EGE Aggregate Log Change Notices: Not Supported 00:21:09.239 Normal NVM Subsystem Shutdown event: Not Supported 00:21:09.239 Zone Descriptor Change Notices: Not Supported 00:21:09.239 Discovery Log Change Notices: Not Supported 00:21:09.239 Controller Attributes 00:21:09.239 128-bit Host Identifier: Supported 00:21:09.239 Non-Operational Permissive Mode: Not Supported 00:21:09.239 NVM Sets: Not Supported 00:21:09.239 Read Recovery Levels: Not Supported 00:21:09.239 Endurance Groups: Not Supported 00:21:09.239 Predictable Latency Mode: Not Supported 00:21:09.239 Traffic Based Keep ALive: Not Supported 00:21:09.239 Namespace Granularity: Not Supported 00:21:09.239 SQ Associations: Not Supported 00:21:09.239 UUID List: Not Supported 00:21:09.239 Multi-Domain Subsystem: Not Supported 00:21:09.239 Fixed Capacity Management: Not Supported 00:21:09.239 Variable Capacity Management: Not Supported 00:21:09.239 Delete Endurance Group: Not Supported 00:21:09.239 Delete NVM Set: Not Supported 00:21:09.239 Extended LBA Formats Supported: Not Supported 00:21:09.239 Flexible Data Placement Supported: Not Supported 00:21:09.239 00:21:09.239 Controller Memory Buffer Support 00:21:09.239 ================================ 00:21:09.239 Supported: No 00:21:09.239 00:21:09.239 Persistent Memory Region Support 00:21:09.239 ================================ 00:21:09.240 Supported: No 00:21:09.240 00:21:09.240 Admin Command Set Attributes 00:21:09.240 ============================ 00:21:09.240 Security Send/Receive: Not Supported 00:21:09.240 Format NVM: Not Supported 00:21:09.240 Firmware Activate/Download: Not Supported 00:21:09.240 Namespace Management: Not Supported 00:21:09.240 Device Self-Test: Not Supported 00:21:09.240 Directives: Not Supported 00:21:09.240 NVMe-MI: Not Supported 00:21:09.240 Virtualization Management: Not Supported 00:21:09.240 Doorbell Buffer Config: Not Supported 00:21:09.240 Get LBA Status Capability: Not Supported 00:21:09.240 Command & Feature Lockdown Capability: Not Supported 00:21:09.240 Abort Command Limit: 4 00:21:09.240 Async Event Request Limit: 4 00:21:09.240 Number of Firmware Slots: N/A 00:21:09.240 Firmware Slot 1 Read-Only: N/A 00:21:09.240 Firmware Activation Without Reset: N/A 00:21:09.240 Multiple Update Detection Support: N/A 00:21:09.240 Firmware Update Granularity: No Information Provided 00:21:09.240 Per-Namespace SMART Log: No 00:21:09.240 Asymmetric Namespace Access Log Page: Not Supported 00:21:09.240 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:09.240 Command Effects Log Page: Supported 00:21:09.240 Get Log Page Extended Data: Supported 00:21:09.240 Telemetry Log Pages: Not Supported 00:21:09.240 Persistent Event Log Pages: Not Supported 00:21:09.240 Supported Log Pages Log Page: May Support 
00:21:09.240 Commands Supported & Effects Log Page: Not Supported 00:21:09.240 Feature Identifiers & Effects Log Page:May Support 00:21:09.240 NVMe-MI Commands & Effects Log Page: May Support 00:21:09.240 Data Area 4 for Telemetry Log: Not Supported 00:21:09.240 Error Log Page Entries Supported: 128 00:21:09.240 Keep Alive: Supported 00:21:09.240 Keep Alive Granularity: 10000 ms 00:21:09.240 00:21:09.240 NVM Command Set Attributes 00:21:09.240 ========================== 00:21:09.240 Submission Queue Entry Size 00:21:09.240 Max: 64 00:21:09.240 Min: 64 00:21:09.240 Completion Queue Entry Size 00:21:09.240 Max: 16 00:21:09.240 Min: 16 00:21:09.240 Number of Namespaces: 32 00:21:09.240 Compare Command: Supported 00:21:09.240 Write Uncorrectable Command: Not Supported 00:21:09.240 Dataset Management Command: Supported 00:21:09.240 Write Zeroes Command: Supported 00:21:09.240 Set Features Save Field: Not Supported 00:21:09.240 Reservations: Supported 00:21:09.240 Timestamp: Not Supported 00:21:09.240 Copy: Supported 00:21:09.240 Volatile Write Cache: Present 00:21:09.240 Atomic Write Unit (Normal): 1 00:21:09.240 Atomic Write Unit (PFail): 1 00:21:09.240 Atomic Compare & Write Unit: 1 00:21:09.240 Fused Compare & Write: Supported 00:21:09.240 Scatter-Gather List 00:21:09.240 SGL Command Set: Supported 00:21:09.240 SGL Keyed: Supported 00:21:09.240 SGL Bit Bucket Descriptor: Not Supported 00:21:09.240 SGL Metadata Pointer: Not Supported 00:21:09.240 Oversized SGL: Not Supported 00:21:09.240 SGL Metadata Address: Not Supported 00:21:09.240 SGL Offset: Supported 00:21:09.240 Transport SGL Data Block: Not Supported 00:21:09.240 Replay Protected Memory Block: Not Supported 00:21:09.240 00:21:09.240 Firmware Slot Information 00:21:09.240 ========================= 00:21:09.240 Active slot: 1 00:21:09.240 Slot 1 Firmware Revision: 25.01 00:21:09.240 00:21:09.240 00:21:09.240 Commands Supported and Effects 00:21:09.240 ============================== 00:21:09.240 Admin Commands 00:21:09.240 -------------- 00:21:09.240 Get Log Page (02h): Supported 00:21:09.240 Identify (06h): Supported 00:21:09.240 Abort (08h): Supported 00:21:09.240 Set Features (09h): Supported 00:21:09.240 Get Features (0Ah): Supported 00:21:09.240 Asynchronous Event Request (0Ch): Supported 00:21:09.240 Keep Alive (18h): Supported 00:21:09.240 I/O Commands 00:21:09.240 ------------ 00:21:09.240 Flush (00h): Supported LBA-Change 00:21:09.240 Write (01h): Supported LBA-Change 00:21:09.240 Read (02h): Supported 00:21:09.240 Compare (05h): Supported 00:21:09.240 Write Zeroes (08h): Supported LBA-Change 00:21:09.240 Dataset Management (09h): Supported LBA-Change 00:21:09.240 Copy (19h): Supported LBA-Change 00:21:09.240 00:21:09.240 Error Log 00:21:09.240 ========= 00:21:09.240 00:21:09.240 Arbitration 00:21:09.240 =========== 00:21:09.240 Arbitration Burst: 1 00:21:09.240 00:21:09.240 Power Management 00:21:09.240 ================ 00:21:09.240 Number of Power States: 1 00:21:09.240 Current Power State: Power State #0 00:21:09.240 Power State #0: 00:21:09.240 Max Power: 0.00 W 00:21:09.240 Non-Operational State: Operational 00:21:09.240 Entry Latency: Not Reported 00:21:09.240 Exit Latency: Not Reported 00:21:09.240 Relative Read Throughput: 0 00:21:09.240 Relative Read Latency: 0 00:21:09.240 Relative Write Throughput: 0 00:21:09.240 Relative Write Latency: 0 00:21:09.240 Idle Power: Not Reported 00:21:09.240 Active Power: Not Reported 00:21:09.240 Non-Operational Permissive Mode: Not Supported 00:21:09.240 00:21:09.240 Health 
Information 00:21:09.240 ================== 00:21:09.240 Critical Warnings: 00:21:09.240 Available Spare Space: OK 00:21:09.240 Temperature: OK 00:21:09.240 Device Reliability: OK 00:21:09.240 Read Only: No 00:21:09.240 Volatile Memory Backup: OK 00:21:09.240 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:09.240 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:09.240 Available Spare: 0% 00:21:09.240 Available Spare Threshold: 0% 00:21:09.240 Life Percentage Used:[2024-11-26 20:51:12.794229] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.240 [2024-11-26 20:51:12.794241] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x98d690) 00:21:09.240 [2024-11-26 20:51:12.794253] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.240 [2024-11-26 20:51:12.794274] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9efb80, cid 7, qid 0 00:21:09.240 [2024-11-26 20:51:12.794417] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.240 [2024-11-26 20:51:12.794432] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.240 [2024-11-26 20:51:12.794439] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.240 [2024-11-26 20:51:12.794446] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9efb80) on tqpair=0x98d690 00:21:09.240 [2024-11-26 20:51:12.794491] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:21:09.240 [2024-11-26 20:51:12.794511] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef100) on tqpair=0x98d690 00:21:09.240 [2024-11-26 20:51:12.794525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.240 [2024-11-26 20:51:12.794535] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef280) on tqpair=0x98d690 00:21:09.240 [2024-11-26 20:51:12.794543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.240 [2024-11-26 20:51:12.794551] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef400) on tqpair=0x98d690 00:21:09.240 [2024-11-26 20:51:12.794559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.240 [2024-11-26 20:51:12.794567] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef580) on tqpair=0x98d690 00:21:09.240 [2024-11-26 20:51:12.794575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:09.240 [2024-11-26 20:51:12.794587] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.240 [2024-11-26 20:51:12.794595] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.240 [2024-11-26 20:51:12.794601] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98d690) 00:21:09.240 [2024-11-26 20:51:12.794612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.240 [2024-11-26 20:51:12.794635] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef580, cid 3, qid 0 00:21:09.240 [2024-11-26 
20:51:12.794738] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.240 [2024-11-26 20:51:12.794751] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.240 [2024-11-26 20:51:12.794758] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.240 [2024-11-26 20:51:12.794765] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef580) on tqpair=0x98d690 00:21:09.240 [2024-11-26 20:51:12.794776] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.240 [2024-11-26 20:51:12.794784] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.240 [2024-11-26 20:51:12.794790] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98d690) 00:21:09.240 [2024-11-26 20:51:12.794801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.240 [2024-11-26 20:51:12.794827] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef580, cid 3, qid 0 00:21:09.240 [2024-11-26 20:51:12.794913] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.240 [2024-11-26 20:51:12.794925] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.240 [2024-11-26 20:51:12.794932] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.794939] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef580) on tqpair=0x98d690 00:21:09.241 [2024-11-26 20:51:12.794947] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:21:09.241 [2024-11-26 20:51:12.794955] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:21:09.241 [2024-11-26 20:51:12.794970] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.794979] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.794985] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98d690) 00:21:09.241 [2024-11-26 20:51:12.794996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.241 [2024-11-26 20:51:12.795017] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef580, cid 3, qid 0 00:21:09.241 [2024-11-26 20:51:12.795104] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.241 [2024-11-26 20:51:12.795118] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.241 [2024-11-26 20:51:12.795125] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.795138] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef580) on tqpair=0x98d690 00:21:09.241 [2024-11-26 20:51:12.795156] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.795175] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.795182] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98d690) 00:21:09.241 [2024-11-26 20:51:12.795193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.241 [2024-11-26 20:51:12.795213] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef580, cid 3, qid 0 00:21:09.241 [2024-11-26 20:51:12.795300] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.241 [2024-11-26 20:51:12.795323] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.241 [2024-11-26 20:51:12.795330] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.795337] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef580) on tqpair=0x98d690 00:21:09.241 [2024-11-26 20:51:12.795354] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.795364] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.795371] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98d690) 00:21:09.241 [2024-11-26 20:51:12.795381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.241 [2024-11-26 20:51:12.795402] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef580, cid 3, qid 0 00:21:09.241 [2024-11-26 20:51:12.795478] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.241 [2024-11-26 20:51:12.795491] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.241 [2024-11-26 20:51:12.795498] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.795505] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef580) on tqpair=0x98d690 00:21:09.241 [2024-11-26 20:51:12.795522] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.795531] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.795538] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98d690) 00:21:09.241 [2024-11-26 20:51:12.795548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.241 [2024-11-26 20:51:12.795569] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef580, cid 3, qid 0 00:21:09.241 [2024-11-26 20:51:12.795640] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.241 [2024-11-26 20:51:12.795652] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.241 [2024-11-26 20:51:12.795659] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.795666] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef580) on tqpair=0x98d690 00:21:09.241 [2024-11-26 20:51:12.795682] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.795691] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.795698] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98d690) 00:21:09.241 [2024-11-26 20:51:12.795708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.241 [2024-11-26 20:51:12.795728] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef580, cid 3, qid 0 00:21:09.241 [2024-11-26 20:51:12.795804] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.241 [2024-11-26 
20:51:12.795818] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.241 [2024-11-26 20:51:12.795825] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.795831] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef580) on tqpair=0x98d690 00:21:09.241 [2024-11-26 20:51:12.795852] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.795863] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.795869] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98d690) 00:21:09.241 [2024-11-26 20:51:12.795880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.241 [2024-11-26 20:51:12.795901] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef580, cid 3, qid 0 00:21:09.241 [2024-11-26 20:51:12.795969] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.241 [2024-11-26 20:51:12.795981] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.241 [2024-11-26 20:51:12.795988] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.795995] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef580) on tqpair=0x98d690 00:21:09.241 [2024-11-26 20:51:12.796010] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.796020] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.796026] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98d690) 00:21:09.241 [2024-11-26 20:51:12.796036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.241 [2024-11-26 20:51:12.796057] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef580, cid 3, qid 0 00:21:09.241 [2024-11-26 20:51:12.796126] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.241 [2024-11-26 20:51:12.796139] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.241 [2024-11-26 20:51:12.796146] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.796152] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef580) on tqpair=0x98d690 00:21:09.241 [2024-11-26 20:51:12.796168] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.796178] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.796184] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98d690) 00:21:09.241 [2024-11-26 20:51:12.796194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.241 [2024-11-26 20:51:12.796215] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef580, cid 3, qid 0 00:21:09.241 [2024-11-26 20:51:12.796286] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.241 [2024-11-26 20:51:12.796298] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.241 [2024-11-26 20:51:12.796314] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.241 [2024-11-26 
20:51:12.796322] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef580) on tqpair=0x98d690 00:21:09.241 [2024-11-26 20:51:12.796339] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.796349] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.796355] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98d690) 00:21:09.241 [2024-11-26 20:51:12.796366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.241 [2024-11-26 20:51:12.796387] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef580, cid 3, qid 0 00:21:09.241 [2024-11-26 20:51:12.796463] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.241 [2024-11-26 20:51:12.796477] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.241 [2024-11-26 20:51:12.796484] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.796491] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef580) on tqpair=0x98d690 00:21:09.241 [2024-11-26 20:51:12.796507] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.796520] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.796528] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98d690) 00:21:09.241 [2024-11-26 20:51:12.796539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.241 [2024-11-26 20:51:12.796560] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef580, cid 3, qid 0 00:21:09.241 [2024-11-26 20:51:12.796642] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.241 [2024-11-26 20:51:12.796656] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.241 [2024-11-26 20:51:12.796663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.796670] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef580) on tqpair=0x98d690 00:21:09.241 [2024-11-26 20:51:12.796686] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.796696] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.796703] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98d690) 00:21:09.241 [2024-11-26 20:51:12.796713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.241 [2024-11-26 20:51:12.796734] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef580, cid 3, qid 0 00:21:09.241 [2024-11-26 20:51:12.796813] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.241 [2024-11-26 20:51:12.796831] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.241 [2024-11-26 20:51:12.796843] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.796853] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef580) on tqpair=0x98d690 00:21:09.241 [2024-11-26 20:51:12.796879] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:21:09.241 [2024-11-26 20:51:12.796894] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.796906] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98d690) 00:21:09.241 [2024-11-26 20:51:12.796922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.241 [2024-11-26 20:51:12.796949] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef580, cid 3, qid 0 00:21:09.241 [2024-11-26 20:51:12.797018] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.241 [2024-11-26 20:51:12.797031] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.241 [2024-11-26 20:51:12.797038] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.797045] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef580) on tqpair=0x98d690 00:21:09.241 [2024-11-26 20:51:12.797061] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.797071] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.797077] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98d690) 00:21:09.241 [2024-11-26 20:51:12.797088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.241 [2024-11-26 20:51:12.797108] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef580, cid 3, qid 0 00:21:09.241 [2024-11-26 20:51:12.797183] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.241 [2024-11-26 20:51:12.797197] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.241 [2024-11-26 20:51:12.797204] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.797211] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef580) on tqpair=0x98d690 00:21:09.241 [2024-11-26 20:51:12.797227] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.797237] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.797248] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x98d690) 00:21:09.241 [2024-11-26 20:51:12.797259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.241 [2024-11-26 20:51:12.797280] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef580, cid 3, qid 0 00:21:09.241 [2024-11-26 20:51:12.801317] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.241 [2024-11-26 20:51:12.801337] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.241 [2024-11-26 20:51:12.801344] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.241 [2024-11-26 20:51:12.801351] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef580) on tqpair=0x98d690 00:21:09.242 [2024-11-26 20:51:12.801371] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:09.242 [2024-11-26 20:51:12.801382] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:09.242 [2024-11-26 20:51:12.801388] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0x98d690) 00:21:09.242 [2024-11-26 20:51:12.801399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:09.242 [2024-11-26 20:51:12.801423] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ef580, cid 3, qid 0 00:21:09.242 [2024-11-26 20:51:12.801537] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:09.242 [2024-11-26 20:51:12.801550] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:09.242 [2024-11-26 20:51:12.801557] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:09.242 [2024-11-26 20:51:12.801564] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ef580) on tqpair=0x98d690 00:21:09.242 [2024-11-26 20:51:12.801578] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds 00:21:09.242 0% 00:21:09.242 Data Units Read: 0 00:21:09.242 Data Units Written: 0 00:21:09.242 Host Read Commands: 0 00:21:09.242 Host Write Commands: 0 00:21:09.242 Controller Busy Time: 0 minutes 00:21:09.242 Power Cycles: 0 00:21:09.242 Power On Hours: 0 hours 00:21:09.242 Unsafe Shutdowns: 0 00:21:09.242 Unrecoverable Media Errors: 0 00:21:09.242 Lifetime Error Log Entries: 0 00:21:09.242 Warning Temperature Time: 0 minutes 00:21:09.242 Critical Temperature Time: 0 minutes 00:21:09.242 00:21:09.242 Number of Queues 00:21:09.242 ================ 00:21:09.242 Number of I/O Submission Queues: 127 00:21:09.242 Number of I/O Completion Queues: 127 00:21:09.242 00:21:09.242 Active Namespaces 00:21:09.242 ================= 00:21:09.242 Namespace ID:1 00:21:09.242 Error Recovery Timeout: Unlimited 00:21:09.242 Command Set Identifier: NVM (00h) 00:21:09.242 Deallocate: Supported 00:21:09.242 Deallocated/Unwritten Error: Not Supported 00:21:09.242 Deallocated Read Value: Unknown 00:21:09.242 Deallocate in Write Zeroes: Not Supported 00:21:09.242 Deallocated Guard Field: 0xFFFF 00:21:09.242 Flush: Supported 00:21:09.242 Reservation: Supported 00:21:09.242 Namespace Sharing Capabilities: Multiple Controllers 00:21:09.242 Size (in LBAs): 131072 (0GiB) 00:21:09.242 Capacity (in LBAs): 131072 (0GiB) 00:21:09.242 Utilization (in LBAs): 131072 (0GiB) 00:21:09.242 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:09.242 EUI64: ABCDEF0123456789 00:21:09.242 UUID: 223ad4c1-affd-4382-b471-f9a9870a799b 00:21:09.242 Thin Provisioning: Not Supported 00:21:09.242 Per-NS Atomic Units: Yes 00:21:09.242 Atomic Boundary Size (Normal): 0 00:21:09.242 Atomic Boundary Size (PFail): 0 00:21:09.242 Atomic Boundary Offset: 0 00:21:09.242 Maximum Single Source Range Length: 65535 00:21:09.242 Maximum Copy Length: 65535 00:21:09.242 Maximum Source Range Count: 1 00:21:09.242 NGUID/EUI64 Never Reused: No 00:21:09.242 Namespace Write Protected: No 00:21:09.242 Number of LBA Formats: 1 00:21:09.242 Current LBA Format: LBA Format #00 00:21:09.242 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:09.242 00:21:09.242 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:09.242 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:09.242 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.242 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:09.242 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.242 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:09.242 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:09.242 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:09.242 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:21:09.242 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:09.242 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:21:09.242 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:09.242 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:09.242 rmmod nvme_tcp 00:21:09.242 rmmod nvme_fabrics 00:21:09.242 rmmod nvme_keyring 00:21:09.242 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:09.242 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:21:09.242 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:21:09.242 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1720422 ']' 00:21:09.242 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1720422 00:21:09.242 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1720422 ']' 00:21:09.242 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1720422 00:21:09.242 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:21:09.242 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:09.242 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1720422 00:21:09.514 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:09.514 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:09.514 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1720422' 00:21:09.514 killing process with pid 1720422 00:21:09.514 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1720422 00:21:09.514 20:51:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1720422 00:21:09.514 20:51:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:09.514 20:51:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:09.514 20:51:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:09.514 20:51:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:21:09.514 20:51:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:21:09.514 20:51:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:09.514 20:51:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:21:09.514 20:51:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:09.514 20:51:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:09.514 20:51:13 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.514 20:51:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:09.514 20:51:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:12.049 00:21:12.049 real 0m5.624s 00:21:12.049 user 0m4.972s 00:21:12.049 sys 0m1.958s 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:12.049 ************************************ 00:21:12.049 END TEST nvmf_identify 00:21:12.049 ************************************ 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.049 ************************************ 00:21:12.049 START TEST nvmf_perf 00:21:12.049 ************************************ 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:12.049 * Looking for test storage... 00:21:12.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:12.049 20:51:15 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:21:12.049 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:12.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.050 --rc genhtml_branch_coverage=1 00:21:12.050 --rc genhtml_function_coverage=1 00:21:12.050 --rc genhtml_legend=1 00:21:12.050 --rc geninfo_all_blocks=1 00:21:12.050 --rc geninfo_unexecuted_blocks=1 00:21:12.050 00:21:12.050 ' 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:12.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.050 --rc genhtml_branch_coverage=1 00:21:12.050 --rc genhtml_function_coverage=1 00:21:12.050 --rc genhtml_legend=1 00:21:12.050 --rc geninfo_all_blocks=1 00:21:12.050 --rc geninfo_unexecuted_blocks=1 00:21:12.050 00:21:12.050 ' 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:12.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.050 --rc genhtml_branch_coverage=1 00:21:12.050 --rc genhtml_function_coverage=1 00:21:12.050 --rc genhtml_legend=1 00:21:12.050 --rc geninfo_all_blocks=1 00:21:12.050 --rc geninfo_unexecuted_blocks=1 00:21:12.050 00:21:12.050 ' 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:12.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.050 --rc genhtml_branch_coverage=1 00:21:12.050 --rc genhtml_function_coverage=1 00:21:12.050 --rc genhtml_legend=1 00:21:12.050 --rc geninfo_all_blocks=1 00:21:12.050 --rc geninfo_unexecuted_blocks=1 00:21:12.050 00:21:12.050 ' 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 
-- # NVMF_PORT=4420 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:12.050 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.050 20:51:15 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:12.050 20:51:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:13.954 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:13.954 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:13.954 Found net devices under 0000:09:00.0: cvl_0_0 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:13.954 20:51:17 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:13.954 Found net devices under 0000:09:00.1: cvl_0_1 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:13.954 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:14.213 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:14.213 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:14.213 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:14.213 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:14.213 20:51:17 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:14.213 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:14.213 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:14.214 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:14.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:14.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:21:14.214 00:21:14.214 --- 10.0.0.2 ping statistics --- 00:21:14.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.214 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:21:14.214 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:14.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:14.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:21:14.214 00:21:14.214 --- 10.0.0.1 ping statistics --- 00:21:14.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.214 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:21:14.214 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:14.214 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:21:14.214 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:14.214 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:14.214 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:14.214 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:14.214 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:14.214 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:14.214 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:14.214 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:14.214 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:14.214 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:14.214 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:14.214 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1722508 00:21:14.214 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:14.214 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1722508 00:21:14.214 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1722508 ']' 00:21:14.214 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.214 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:14.214 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:21:14.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.214 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:14.214 20:51:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:14.214 [2024-11-26 20:51:17.805721] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:21:14.214 [2024-11-26 20:51:17.805797] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:14.214 [2024-11-26 20:51:17.878953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:14.472 [2024-11-26 20:51:17.941246] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:14.472 [2024-11-26 20:51:17.941296] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:14.472 [2024-11-26 20:51:17.941333] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:14.472 [2024-11-26 20:51:17.941344] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:14.472 [2024-11-26 20:51:17.941354] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:14.472 [2024-11-26 20:51:17.942966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:14.472 [2024-11-26 20:51:17.943032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:14.472 [2024-11-26 20:51:17.943082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:14.472 [2024-11-26 20:51:17.943085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.472 20:51:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:14.472 20:51:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:21:14.472 20:51:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:14.472 20:51:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:14.472 20:51:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:14.472 20:51:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:14.472 20:51:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:14.472 20:51:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:17.748 20:51:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:17.748 20:51:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:18.005 20:51:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:0b:00.0 00:21:18.005 20:51:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:18.263 20:51:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
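For readers following the xtrace, the target-side setup that host/perf.sh performs around this point reduces to a handful of rpc.py calls. A minimal sketch collecting the commands visible in the trace (the 10.0.0.2/4420 listener, the 0000:0b:00.0 local NVMe device and the workspace path are specific to this CI node, and nvmf_tgt is assumed to be already running in the cvl_0_0_ns_spdk namespace as started earlier in the log):

  # Target-side setup distilled from the host/perf.sh trace (sketch, not the script itself).
  # nvmf_tgt was started earlier as:
  #   ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $rpc bdev_malloc_create 64 512                                    # -> Malloc0
  # Nvme0n1 itself is attached via: gen_nvme.sh | $rpc load_subsystem_config (traddr 0000:0b:00.0)
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1     # local NVMe at 0000:0b:00.0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

perf.sh then points spdk_nvme_perf either at the local PCIe device or at the TCP listener created here, which is where the latency tables that follow come from.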
00:21:18.263 20:51:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:0b:00.0 ']' 00:21:18.263 20:51:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:18.263 20:51:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:18.263 20:51:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:18.520 [2024-11-26 20:51:22.095383] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.520 20:51:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:18.776 20:51:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:18.776 20:51:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:19.034 20:51:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:19.034 20:51:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:19.292 20:51:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:19.549 [2024-11-26 20:51:23.179258] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:19.549 20:51:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:19.807 20:51:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:0b:00.0 ']' 00:21:19.807 20:51:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:21:19.807 20:51:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:19.807 20:51:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:21:21.178 Initializing NVMe Controllers 00:21:21.178 Attached to NVMe Controller at 0000:0b:00.0 [8086:0a54] 00:21:21.178 Associating PCIE (0000:0b:00.0) NSID 1 with lcore 0 00:21:21.178 Initialization complete. Launching workers. 
00:21:21.178 ======================================================== 00:21:21.178 Latency(us) 00:21:21.178 Device Information : IOPS MiB/s Average min max 00:21:21.178 PCIE (0000:0b:00.0) NSID 1 from core 0: 84493.49 330.05 378.28 28.59 8256.36 00:21:21.178 ======================================================== 00:21:21.178 Total : 84493.49 330.05 378.28 28.59 8256.36 00:21:21.178 00:21:21.178 20:51:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:22.547 Initializing NVMe Controllers 00:21:22.547 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:22.547 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:22.547 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:22.548 Initialization complete. Launching workers. 00:21:22.548 ======================================================== 00:21:22.548 Latency(us) 00:21:22.548 Device Information : IOPS MiB/s Average min max 00:21:22.548 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 94.67 0.37 10815.21 140.43 44938.87 00:21:22.548 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 55.80 0.22 17919.62 7448.39 47900.28 00:21:22.548 ======================================================== 00:21:22.548 Total : 150.47 0.59 13449.96 140.43 47900.28 00:21:22.548 00:21:22.548 20:51:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:23.922 Initializing NVMe Controllers 00:21:23.922 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:23.922 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:23.922 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:23.922 Initialization complete. Launching workers. 00:21:23.922 ======================================================== 00:21:23.922 Latency(us) 00:21:23.922 Device Information : IOPS MiB/s Average min max 00:21:23.922 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8283.03 32.36 3864.17 613.83 10005.06 00:21:23.922 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3751.56 14.65 8568.35 5078.87 16721.65 00:21:23.922 ======================================================== 00:21:23.922 Total : 12034.59 47.01 5330.61 613.83 16721.65 00:21:23.922 00:21:23.922 20:51:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:21:23.922 20:51:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:21:23.922 20:51:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:26.450 Initializing NVMe Controllers 00:21:26.450 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:26.450 Controller IO queue size 128, less than required. 00:21:26.450 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:21:26.450 Controller IO queue size 128, less than required. 00:21:26.450 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:26.450 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:26.450 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:26.450 Initialization complete. Launching workers. 00:21:26.450 ======================================================== 00:21:26.450 Latency(us) 00:21:26.450 Device Information : IOPS MiB/s Average min max 00:21:26.450 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1631.49 407.87 80439.57 47150.95 119338.50 00:21:26.450 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 573.00 143.25 228521.73 120787.05 329483.17 00:21:26.450 ======================================================== 00:21:26.450 Total : 2204.49 551.12 118929.52 47150.95 329483.17 00:21:26.450 00:21:26.450 20:51:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:21:26.707 No valid NVMe controllers or AIO or URING devices found 00:21:26.707 Initializing NVMe Controllers 00:21:26.707 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:26.707 Controller IO queue size 128, less than required. 00:21:26.707 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:26.707 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:26.707 Controller IO queue size 128, less than required. 00:21:26.707 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:26.707 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:21:26.707 WARNING: Some requested NVMe devices were skipped 00:21:26.707 20:51:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:21:29.233 Initializing NVMe Controllers 00:21:29.233 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:29.233 Controller IO queue size 128, less than required. 00:21:29.233 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:29.233 Controller IO queue size 128, less than required. 00:21:29.233 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:29.233 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:29.233 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:29.233 Initialization complete. Launching workers. 
00:21:29.233 00:21:29.233 ==================== 00:21:29.233 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:29.233 TCP transport: 00:21:29.233 polls: 9016 00:21:29.233 idle_polls: 5879 00:21:29.233 sock_completions: 3137 00:21:29.233 nvme_completions: 5975 00:21:29.233 submitted_requests: 8988 00:21:29.233 queued_requests: 1 00:21:29.233 00:21:29.233 ==================== 00:21:29.233 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:29.233 TCP transport: 00:21:29.233 polls: 9033 00:21:29.233 idle_polls: 5535 00:21:29.233 sock_completions: 3498 00:21:29.233 nvme_completions: 6615 00:21:29.233 submitted_requests: 10000 00:21:29.233 queued_requests: 1 00:21:29.233 ======================================================== 00:21:29.233 Latency(us) 00:21:29.233 Device Information : IOPS MiB/s Average min max 00:21:29.233 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1493.45 373.36 88132.55 72109.27 150642.56 00:21:29.233 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1653.44 413.36 77724.57 39980.67 126555.99 00:21:29.233 ======================================================== 00:21:29.233 Total : 3146.89 786.72 82663.98 39980.67 150642.56 00:21:29.233 00:21:29.233 20:51:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:21:29.233 20:51:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:29.491 20:51:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:21:29.491 20:51:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:29.491 20:51:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:29.491 20:51:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:29.491 20:51:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:21:29.491 20:51:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:29.491 20:51:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:21:29.491 20:51:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:29.491 20:51:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:29.491 rmmod nvme_tcp 00:21:29.491 rmmod nvme_fabrics 00:21:29.491 rmmod nvme_keyring 00:21:29.491 20:51:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:29.491 20:51:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:21:29.491 20:51:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:21:29.491 20:51:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1722508 ']' 00:21:29.491 20:51:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1722508 00:21:29.491 20:51:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1722508 ']' 00:21:29.491 20:51:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1722508 00:21:29.491 20:51:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:21:29.491 20:51:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:29.491 20:51:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1722508 00:21:29.491 20:51:33 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:29.491 20:51:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:29.491 20:51:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1722508' 00:21:29.491 killing process with pid 1722508 00:21:29.491 20:51:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 1722508 00:21:29.491 20:51:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1722508 00:21:31.390 20:51:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:31.390 20:51:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:31.390 20:51:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:31.390 20:51:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:21:31.390 20:51:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:21:31.390 20:51:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:31.390 20:51:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:21:31.390 20:51:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:31.390 20:51:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:31.390 20:51:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.390 20:51:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:31.390 20:51:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.295 20:51:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:33.295 00:21:33.295 real 0m21.508s 00:21:33.295 user 1m5.783s 00:21:33.295 sys 0m5.884s 00:21:33.295 20:51:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:33.295 20:51:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:33.295 ************************************ 00:21:33.295 END TEST nvmf_perf 00:21:33.295 ************************************ 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:33.296 ************************************ 00:21:33.296 START TEST nvmf_fio_host 00:21:33.296 ************************************ 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:33.296 * Looking for test storage... 
00:21:33.296 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:33.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.296 --rc genhtml_branch_coverage=1 00:21:33.296 --rc genhtml_function_coverage=1 00:21:33.296 --rc genhtml_legend=1 00:21:33.296 --rc geninfo_all_blocks=1 00:21:33.296 --rc geninfo_unexecuted_blocks=1 00:21:33.296 00:21:33.296 ' 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:33.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.296 --rc genhtml_branch_coverage=1 00:21:33.296 --rc genhtml_function_coverage=1 00:21:33.296 --rc genhtml_legend=1 00:21:33.296 --rc geninfo_all_blocks=1 00:21:33.296 --rc geninfo_unexecuted_blocks=1 00:21:33.296 00:21:33.296 ' 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:33.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.296 --rc genhtml_branch_coverage=1 00:21:33.296 --rc genhtml_function_coverage=1 00:21:33.296 --rc genhtml_legend=1 00:21:33.296 --rc geninfo_all_blocks=1 00:21:33.296 --rc geninfo_unexecuted_blocks=1 00:21:33.296 00:21:33.296 ' 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:33.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.296 --rc genhtml_branch_coverage=1 00:21:33.296 --rc genhtml_function_coverage=1 00:21:33.296 --rc genhtml_legend=1 00:21:33.296 --rc geninfo_all_blocks=1 00:21:33.296 --rc geninfo_unexecuted_blocks=1 00:21:33.296 00:21:33.296 ' 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:33.296 20:51:36 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:33.296 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:33.297 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.297 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.297 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.297 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:33.297 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.297 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:21:33.297 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:33.297 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:33.297 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:33.297 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:33.297 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:33.297 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:33.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:33.297 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:33.297 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:33.297 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:33.297 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:33.297 
20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:33.297 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:33.297 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:33.297 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:33.297 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:33.297 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:33.297 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.297 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:33.297 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.554 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:33.554 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:33.554 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:21:33.554 20:51:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:36.096 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:36.096 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:36.096 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:36.097 Found net devices under 0000:09:00.0: cvl_0_0 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:36.097 Found net devices under 0000:09:00.1: cvl_0_1 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:36.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:36.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:21:36.097 00:21:36.097 --- 10.0.0.2 ping statistics --- 00:21:36.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.097 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:36.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:36.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:21:36.097 00:21:36.097 --- 10.0.0.1 ping statistics --- 00:21:36.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.097 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1726484 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1726484 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1726484 ']' 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:36.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.097 [2024-11-26 20:51:39.404814] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
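Note on the nvmftestinit sequence above: the target-side e810 port (cvl_0_0) is moved into its own network namespace and given 10.0.0.2/24, while the initiator-side port (cvl_0_1) keeps 10.0.0.1/24 in the root namespace; nvmf_tgt is then started inside that namespace. The condensed sketch below restates those steps; interface names and addresses are the ones printed in this log and will differ on other hosts.

    # Condensed sketch of the target-side namespace setup performed by nvmftestinit
    # (interface names and IPs taken from this log).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # The target then runs inside the namespace so the initiator reaches it at 10.0.0.2:4420:
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF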
00:21:36.097 [2024-11-26 20:51:39.404881] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.097 [2024-11-26 20:51:39.477973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:36.097 [2024-11-26 20:51:39.537979] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:36.097 [2024-11-26 20:51:39.538026] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:36.097 [2024-11-26 20:51:39.538050] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:36.097 [2024-11-26 20:51:39.538061] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:36.097 [2024-11-26 20:51:39.538071] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:36.097 [2024-11-26 20:51:39.539753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:36.097 [2024-11-26 20:51:39.539818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:36.097 [2024-11-26 20:51:39.539893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:36.097 [2024-11-26 20:51:39.539897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:21:36.097 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:36.356 [2024-11-26 20:51:39.974320] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:36.356 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:36.356 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:36.356 20:51:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:36.356 20:51:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:36.922 Malloc1 00:21:36.922 20:51:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:36.922 20:51:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:37.488 20:51:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:37.488 [2024-11-26 20:51:41.181301] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:37.745 20:51:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:38.003 20:51:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:21:38.003 20:51:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:38.003 20:51:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:38.003 20:51:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:38.003 20:51:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:38.003 20:51:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:38.003 20:51:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:38.003 20:51:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:21:38.003 20:51:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:38.003 20:51:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:38.003 20:51:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:38.003 20:51:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:21:38.003 20:51:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:38.003 20:51:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:38.003 20:51:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:38.003 20:51:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:38.003 20:51:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:38.003 20:51:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:38.003 20:51:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:38.003 20:51:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:38.003 20:51:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:38.003 20:51:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:38.003 20:51:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:38.260 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:38.260 fio-3.35 00:21:38.260 Starting 1 thread 00:21:40.788 00:21:40.788 test: (groupid=0, jobs=1): 
err= 0: pid=1726956: Tue Nov 26 20:51:43 2024 00:21:40.788 read: IOPS=8880, BW=34.7MiB/s (36.4MB/s)(69.6MiB/2006msec) 00:21:40.788 slat (nsec): min=1921, max=235506, avg=2476.57, stdev=2794.29 00:21:40.788 clat (usec): min=3019, max=14025, avg=7869.15, stdev=655.09 00:21:40.788 lat (usec): min=3058, max=14028, avg=7871.63, stdev=654.90 00:21:40.788 clat percentiles (usec): 00:21:40.788 | 1.00th=[ 6390], 5.00th=[ 6849], 10.00th=[ 7046], 20.00th=[ 7308], 00:21:40.788 | 30.00th=[ 7570], 40.00th=[ 7701], 50.00th=[ 7898], 60.00th=[ 8029], 00:21:40.788 | 70.00th=[ 8225], 80.00th=[ 8356], 90.00th=[ 8717], 95.00th=[ 8848], 00:21:40.788 | 99.00th=[ 9241], 99.50th=[ 9503], 99.90th=[11863], 99.95th=[12518], 00:21:40.788 | 99.99th=[13960] 00:21:40.788 bw ( KiB/s): min=34552, max=36032, per=99.89%, avg=35486.00, stdev=644.43, samples=4 00:21:40.788 iops : min= 8638, max= 9008, avg=8871.50, stdev=161.11, samples=4 00:21:40.788 write: IOPS=8893, BW=34.7MiB/s (36.4MB/s)(69.7MiB/2006msec); 0 zone resets 00:21:40.788 slat (usec): min=2, max=195, avg= 2.58, stdev= 1.85 00:21:40.788 clat (usec): min=2229, max=12513, avg=6469.16, stdev=541.97 00:21:40.788 lat (usec): min=2243, max=12515, avg=6471.73, stdev=541.91 00:21:40.788 clat percentiles (usec): 00:21:40.788 | 1.00th=[ 5211], 5.00th=[ 5669], 10.00th=[ 5866], 20.00th=[ 6063], 00:21:40.788 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6456], 60.00th=[ 6587], 00:21:40.788 | 70.00th=[ 6718], 80.00th=[ 6915], 90.00th=[ 7111], 95.00th=[ 7242], 00:21:40.788 | 99.00th=[ 7635], 99.50th=[ 7767], 99.90th=[10683], 99.95th=[11731], 00:21:40.788 | 99.99th=[12518] 00:21:40.788 bw ( KiB/s): min=35272, max=35832, per=99.99%, avg=35572.00, stdev=274.54, samples=4 00:21:40.788 iops : min= 8818, max= 8958, avg=8893.00, stdev=68.63, samples=4 00:21:40.788 lat (msec) : 4=0.11%, 10=99.71%, 20=0.18% 00:21:40.788 cpu : usr=67.08%, sys=31.42%, ctx=83, majf=0, minf=31 00:21:40.788 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:40.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:40.788 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:40.788 issued rwts: total=17815,17841,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:40.788 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:40.788 00:21:40.788 Run status group 0 (all jobs): 00:21:40.788 READ: bw=34.7MiB/s (36.4MB/s), 34.7MiB/s-34.7MiB/s (36.4MB/s-36.4MB/s), io=69.6MiB (73.0MB), run=2006-2006msec 00:21:40.788 WRITE: bw=34.7MiB/s (36.4MB/s), 34.7MiB/s-34.7MiB/s (36.4MB/s-36.4MB/s), io=69.7MiB (73.1MB), run=2006-2006msec 00:21:40.788 20:51:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:40.788 20:51:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:40.788 20:51:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:40.788 20:51:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:40.788 20:51:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local 
sanitizers 00:21:40.788 20:51:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:40.788 20:51:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:21:40.788 20:51:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:40.788 20:51:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:40.788 20:51:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:40.788 20:51:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:21:40.788 20:51:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:40.788 20:51:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:40.788 20:51:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:40.788 20:51:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:40.788 20:51:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:40.788 20:51:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:21:40.789 20:51:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:40.789 20:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:21:40.789 20:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:21:40.789 20:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:40.789 20:51:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:40.789 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:40.789 fio-3.35 00:21:40.789 Starting 1 thread 00:21:43.384 00:21:43.384 test: (groupid=0, jobs=1): err= 0: pid=1727294: Tue Nov 26 20:51:46 2024 00:21:43.384 read: IOPS=8106, BW=127MiB/s (133MB/s)(254MiB/2008msec) 00:21:43.384 slat (nsec): min=2847, max=96836, avg=3758.28, stdev=1637.19 00:21:43.384 clat (usec): min=2764, max=17723, avg=9021.13, stdev=2166.92 00:21:43.384 lat (usec): min=2768, max=17726, avg=9024.89, stdev=2166.94 00:21:43.384 clat percentiles (usec): 00:21:43.384 | 1.00th=[ 4752], 5.00th=[ 5800], 10.00th=[ 6390], 20.00th=[ 7177], 00:21:43.384 | 30.00th=[ 7767], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9372], 00:21:43.384 | 70.00th=[ 9896], 80.00th=[10683], 90.00th=[11863], 95.00th=[13173], 00:21:43.384 | 99.00th=[15139], 99.50th=[15795], 99.90th=[16581], 99.95th=[16712], 00:21:43.384 | 99.99th=[17171] 00:21:43.384 bw ( KiB/s): min=62912, max=72320, per=50.73%, avg=65800.00, stdev=4426.37, samples=4 00:21:43.384 iops : min= 3932, max= 4520, avg=4112.50, stdev=276.65, samples=4 00:21:43.384 write: IOPS=4666, BW=72.9MiB/s (76.5MB/s)(135MiB/1846msec); 0 zone resets 00:21:43.384 slat 
(usec): min=30, max=148, avg=33.13, stdev= 4.58 00:21:43.384 clat (usec): min=5193, max=23831, avg=11978.46, stdev=2231.64 00:21:43.384 lat (usec): min=5224, max=23862, avg=12011.59, stdev=2231.78 00:21:43.384 clat percentiles (usec): 00:21:43.385 | 1.00th=[ 7635], 5.00th=[ 8586], 10.00th=[ 9241], 20.00th=[10028], 00:21:43.385 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11863], 60.00th=[12387], 00:21:43.385 | 70.00th=[13042], 80.00th=[13829], 90.00th=[14877], 95.00th=[15664], 00:21:43.385 | 99.00th=[17433], 99.50th=[19268], 99.90th=[23200], 99.95th=[23462], 00:21:43.385 | 99.99th=[23725] 00:21:43.385 bw ( KiB/s): min=64160, max=75680, per=91.56%, avg=68360.00, stdev=5090.94, samples=4 00:21:43.385 iops : min= 4010, max= 4730, avg=4272.50, stdev=318.18, samples=4 00:21:43.385 lat (msec) : 4=0.15%, 10=53.17%, 20=46.58%, 50=0.09% 00:21:43.385 cpu : usr=76.08%, sys=22.62%, ctx=42, majf=0, minf=51 00:21:43.385 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:21:43.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:43.385 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:43.385 issued rwts: total=16278,8614,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:43.385 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:43.385 00:21:43.385 Run status group 0 (all jobs): 00:21:43.385 READ: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=254MiB (267MB), run=2008-2008msec 00:21:43.385 WRITE: bw=72.9MiB/s (76.5MB/s), 72.9MiB/s-72.9MiB/s (76.5MB/s-76.5MB/s), io=135MiB (141MB), run=1846-1846msec 00:21:43.385 20:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:43.385 20:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:21:43.385 20:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:43.385 20:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:43.385 20:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:43.385 20:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:43.385 20:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:21:43.385 20:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:43.385 20:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:21:43.385 20:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:43.385 20:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:43.385 rmmod nvme_tcp 00:21:43.385 rmmod nvme_fabrics 00:21:43.385 rmmod nvme_keyring 00:21:43.385 20:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:43.385 20:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:21:43.385 20:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:21:43.385 20:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1726484 ']' 00:21:43.385 20:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1726484 00:21:43.385 20:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1726484 ']' 00:21:43.385 20:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@958 -- # kill -0 1726484 00:21:43.385 20:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:21:43.385 20:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:43.385 20:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1726484 00:21:43.385 20:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:43.385 20:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:43.385 20:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1726484' 00:21:43.385 killing process with pid 1726484 00:21:43.385 20:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1726484 00:21:43.385 20:51:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1726484 00:21:43.644 20:51:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:43.644 20:51:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:43.644 20:51:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:43.644 20:51:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:21:43.644 20:51:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:21:43.644 20:51:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:43.644 20:51:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:21:43.644 20:51:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:43.644 20:51:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:43.644 20:51:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.644 20:51:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.644 20:51:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.177 20:51:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:46.177 00:21:46.177 real 0m12.475s 00:21:46.177 user 0m36.680s 00:21:46.177 sys 0m4.055s 00:21:46.177 20:51:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:46.177 20:51:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:46.177 ************************************ 00:21:46.177 END TEST nvmf_fio_host 00:21:46.177 ************************************ 00:21:46.177 20:51:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:46.177 20:51:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:46.177 20:51:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:46.177 20:51:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:46.177 ************************************ 00:21:46.177 START TEST nvmf_failover 00:21:46.177 ************************************ 00:21:46.177 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:46.177 * Looking for test storage... 00:21:46.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:46.177 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:46.177 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:46.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.178 --rc genhtml_branch_coverage=1 00:21:46.178 --rc genhtml_function_coverage=1 00:21:46.178 --rc genhtml_legend=1 00:21:46.178 --rc geninfo_all_blocks=1 00:21:46.178 --rc geninfo_unexecuted_blocks=1 00:21:46.178 00:21:46.178 ' 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:46.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.178 --rc genhtml_branch_coverage=1 00:21:46.178 --rc genhtml_function_coverage=1 00:21:46.178 --rc genhtml_legend=1 00:21:46.178 --rc geninfo_all_blocks=1 00:21:46.178 --rc geninfo_unexecuted_blocks=1 00:21:46.178 00:21:46.178 ' 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:46.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.178 --rc genhtml_branch_coverage=1 00:21:46.178 --rc genhtml_function_coverage=1 00:21:46.178 --rc genhtml_legend=1 00:21:46.178 --rc geninfo_all_blocks=1 00:21:46.178 --rc geninfo_unexecuted_blocks=1 00:21:46.178 00:21:46.178 ' 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:46.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.178 --rc genhtml_branch_coverage=1 00:21:46.178 --rc genhtml_function_coverage=1 00:21:46.178 --rc genhtml_legend=1 00:21:46.178 --rc geninfo_all_blocks=1 00:21:46.178 --rc geninfo_unexecuted_blocks=1 00:21:46.178 00:21:46.178 ' 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.178 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.179 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:46.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:46.179 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:46.179 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:46.179 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:46.179 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:46.179 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:46.179 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:21:46.179 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:46.179 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:46.179 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:46.179 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:46.179 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:46.179 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:46.179 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:46.179 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.179 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:46.179 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.179 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:46.179 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:46.179 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:21:46.179 20:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:48.078 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:48.078 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:48.078 Found net devices under 0000:09:00.0: cvl_0_0 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:48.078 Found net devices under 0000:09:00.1: cvl_0_1 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:48.078 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:48.079 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:21:48.079 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:48.079 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:48.079 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:48.079 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:48.079 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:48.079 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:48.079 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:48.079 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:48.079 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:48.079 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:48.079 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:48.079 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:48.079 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:48.079 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:48.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:48.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:21:48.079 00:21:48.079 --- 10.0.0.2 ping statistics --- 00:21:48.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.079 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:21:48.079 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:48.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:48.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:21:48.336 00:21:48.336 --- 10.0.0.1 ping statistics --- 00:21:48.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.336 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:21:48.336 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:48.336 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:21:48.336 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:48.336 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:48.336 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:48.336 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:48.336 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:48.336 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:48.336 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:48.336 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:48.336 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:48.336 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:48.336 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:48.336 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1729499 00:21:48.336 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:48.336 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 1729499 00:21:48.336 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1729499 ']' 00:21:48.336 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.336 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:48.336 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.336 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:48.336 20:51:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:48.336 [2024-11-26 20:51:51.852998] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:21:48.336 [2024-11-26 20:51:51.853093] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:48.336 [2024-11-26 20:51:51.927201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:48.336 [2024-11-26 20:51:51.985136] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:48.336 [2024-11-26 20:51:51.985187] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:48.336 [2024-11-26 20:51:51.985211] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:48.336 [2024-11-26 20:51:51.985222] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:48.336 [2024-11-26 20:51:51.985231] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:48.336 [2024-11-26 20:51:51.986713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:48.336 [2024-11-26 20:51:51.986770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:48.336 [2024-11-26 20:51:51.986774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:48.596 20:51:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:48.596 20:51:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:21:48.596 20:51:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:48.596 20:51:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:48.596 20:51:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:48.596 20:51:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:48.596 20:51:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:48.853 [2024-11-26 20:51:52.405947] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:48.853 20:51:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:49.111 Malloc0 00:21:49.111 20:51:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:49.371 20:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:49.630 20:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:49.888 [2024-11-26 20:51:53.542993] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.888 20:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:50.145 [2024-11-26 20:51:53.811817] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:50.145 20:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:50.710 [2024-11-26 20:51:54.120813] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
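[editor's note] The target-side setup traced above reduces to a short RPC sequence. A minimal sketch of the same steps, assuming an nvmf_tgt instance is already running and reusing the paths, NQN, address and ports that appear in this trace:

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

    # create the TCP transport with 8192-byte in-capsule data, as failover.sh@22 does
    $RPC nvmf_create_transport -t tcp -o -u 8192

    # back the subsystem with a 64 MiB, 512-byte-block malloc bdev (failover.sh@23)
    $RPC bdev_malloc_create 64 512 -b Malloc0

    # create the subsystem, attach the namespace, and expose it on three TCP ports
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
    done

The three listeners on 4420/4421/4422 give the host multiple paths to the same subsystem, which is what the failover steps below toggle.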
Listening on 10.0.0.2 port 4422 *** 00:21:50.710 20:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1729841 00:21:50.710 20:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:50.710 20:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:50.710 20:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1729841 /var/tmp/bdevperf.sock 00:21:50.710 20:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1729841 ']' 00:21:50.710 20:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:50.710 20:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:50.710 20:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:50.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:50.710 20:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:50.710 20:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:50.967 20:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:50.967 20:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:21:50.967 20:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:51.226 NVMe0n1 00:21:51.484 20:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:51.741 00:21:51.741 20:51:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1730045 00:21:51.741 20:51:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:51.741 20:51:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:53.114 20:51:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:53.114 20:51:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:21:56.390 20:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:56.390 00:21:56.390 20:52:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:56.647 20:52:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:21:59.927 20:52:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:00.185 [2024-11-26 20:52:03.629666] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:00.186 20:52:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:22:01.119 20:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:01.376 20:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1730045 00:22:07.939 { 00:22:07.940 "results": [ 00:22:07.940 { 00:22:07.940 "job": "NVMe0n1", 00:22:07.940 "core_mask": "0x1", 00:22:07.940 "workload": "verify", 00:22:07.940 "status": "finished", 00:22:07.940 "verify_range": { 00:22:07.940 "start": 0, 00:22:07.940 "length": 16384 00:22:07.940 }, 00:22:07.940 "queue_depth": 128, 00:22:07.940 "io_size": 4096, 00:22:07.940 "runtime": 15.00472, 00:22:07.940 "iops": 8693.597747908658, 00:22:07.940 "mibps": 33.959366202768194, 00:22:07.940 "io_failed": 5133, 00:22:07.940 "io_timeout": 0, 00:22:07.940 "avg_latency_us": 14137.806033995463, 00:22:07.940 "min_latency_us": 555.2355555555556, 00:22:07.940 "max_latency_us": 16019.91111111111 00:22:07.940 } 00:22:07.940 ], 00:22:07.940 "core_count": 1 00:22:07.940 } 00:22:07.940 20:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1729841 00:22:07.940 20:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1729841 ']' 00:22:07.940 20:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1729841 00:22:07.940 20:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:07.940 20:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:07.940 20:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1729841 00:22:07.940 20:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:07.940 20:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:07.940 20:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1729841' 00:22:07.940 killing process with pid 1729841 00:22:07.940 20:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1729841 00:22:07.940 20:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1729841 00:22:07.940 20:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:07.940 [2024-11-26 20:51:54.191524] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
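[editor's note] The host-side half of the test, paraphrased from the trace above: bdevperf runs a 15-second verify workload while failover.sh removes and re-adds listeners under the active path. A rough sketch, assuming the bdevperf instance from the log is already started with "-z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f" and using the same socket, NQN and ports as the trace:

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    BPERF_SOCK=/var/tmp/bdevperf.sock
    NQN=nqn.2016-06.io.spdk:cnode1

    # attach the same controller through two paths so bdevperf can fail over between them
    $RPC -s $BPERF_SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN -x failover
    $RPC -s $BPERF_SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN -x failover

    # kick off the verify workload in the background (failover.sh@38)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests &

    # while I/O runs, pull listeners out from under the active path and add new ones,
    # mirroring failover.sh@43..@57
    sleep 1
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420
    sleep 3
    $RPC -s $BPERF_SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN -x failover
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421
    sleep 3
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422
    wait

Each removed listener shows up in the bdevperf results above as failed I/O ("io_failed": 5133) that the multipath layer retries on the surviving path; the repeated "ABORTED - SQ DELETION" qpair notices that follow in try.txt are the per-command record of those aborts.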
00:22:07.940 [2024-11-26 20:51:54.191652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1729841 ] 00:22:07.940 [2024-11-26 20:51:54.262228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.940 [2024-11-26 20:51:54.321592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.940 Running I/O for 15 seconds... 00:22:07.940 8642.00 IOPS, 33.76 MiB/s [2024-11-26T19:52:11.637Z] [2024-11-26 20:51:56.638362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.940 [2024-11-26 20:51:56.638422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.940 [2024-11-26 20:51:56.638457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.940 [2024-11-26 20:51:56.638475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.940 [2024-11-26 20:51:56.638491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.940 [2024-11-26 20:51:56.638507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.940 [2024-11-26 20:51:56.638522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.940 [2024-11-26 20:51:56.638536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.940 [2024-11-26 20:51:56.638552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.940 [2024-11-26 20:51:56.638567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.940 [2024-11-26 20:51:56.638583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.940 [2024-11-26 20:51:56.638598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.940 [2024-11-26 20:51:56.638614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.940 [2024-11-26 20:51:56.638629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.940 [2024-11-26 20:51:56.638645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.940 [2024-11-26 20:51:56.638659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.940 [2024-11-26 20:51:56.638676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80280 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.940 [2024-11-26 20:51:56.638707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.940 [2024-11-26 20:51:56.638723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.940 [2024-11-26 20:51:56.638737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.940 [2024-11-26 20:51:56.638768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.940 [2024-11-26 20:51:56.638781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.940 [2024-11-26 20:51:56.638811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.940 [2024-11-26 20:51:56.638826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.940 [2024-11-26 20:51:56.638841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.940 [2024-11-26 20:51:56.638854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.940 [2024-11-26 20:51:56.638868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.940 [2024-11-26 20:51:56.638881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.940 [2024-11-26 20:51:56.638895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.940 [2024-11-26 20:51:56.638908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.940 [2024-11-26 20:51:56.638922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.940 [2024-11-26 20:51:56.638935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.940 [2024-11-26 20:51:56.638950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.940 [2024-11-26 20:51:56.638963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.940 [2024-11-26 20:51:56.638977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.940 [2024-11-26 20:51:56.638991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.940 [2024-11-26 20:51:56.639007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.940 
[2024-11-26 20:51:56.639020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.940 [2024-11-26 20:51:56.639035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.940 [2024-11-26 20:51:56.639048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.940 [2024-11-26 20:51:56.639062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.940 [2024-11-26 20:51:56.639075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.940 [2024-11-26 20:51:56.639089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.940 [2024-11-26 20:51:56.639103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.940 [2024-11-26 20:51:56.639117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.940 [2024-11-26 20:51:56.639130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.940 [2024-11-26 20:51:56.639145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.940 [2024-11-26 20:51:56.639162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.940 [2024-11-26 20:51:56.639177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.940 [2024-11-26 20:51:56.639190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.940 [2024-11-26 20:51:56.639204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.940 [2024-11-26 20:51:56.639218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.940 [2024-11-26 20:51:56.639232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.940 [2024-11-26 20:51:56.639246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.940 [2024-11-26 20:51:56.639260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.940 [2024-11-26 20:51:56.639273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.941 [2024-11-26 20:51:56.639313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.941 [2024-11-26 20:51:56.639330] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.941 [2024-11-26 20:51:56.639346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.941 [2024-11-26 20:51:56.639360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.941 [2024-11-26 20:51:56.639374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.941 [2024-11-26 20:51:56.639388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.941 [2024-11-26 20:51:56.639403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.941 [2024-11-26 20:51:56.639417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.941 [2024-11-26 20:51:56.639431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.941 [2024-11-26 20:51:56.639446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.941 [2024-11-26 20:51:56.639461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.941 [2024-11-26 20:51:56.639474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.941 [2024-11-26 20:51:56.639490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.941 [2024-11-26 20:51:56.639504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.941 [2024-11-26 20:51:56.639519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.941 [2024-11-26 20:51:56.639533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.941 [2024-11-26 20:51:56.639552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.941 [2024-11-26 20:51:56.639567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.941 [2024-11-26 20:51:56.639582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.941 [2024-11-26 20:51:56.639596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.941 [2024-11-26 20:51:56.639611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.941 [2024-11-26 20:51:56.639624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.941 [2024-11-26 20:51:56.639654 .. 20:51:56.642350] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: each outstanding I/O on sqid:1 (WRITE lba:80528-80712, READ lba:79696-80200, len:8) completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [repeated command/completion pairs condensed]
00:22:07.943 [2024-11-26 20:51:56.642365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb21420 is same with the state(6) to be set
00:22:07.943 [2024-11-26 20:51:56.642382] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:07.943 [2024-11-26 20:51:56.642394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:07.943 [2024-11-26 20:51:56.642406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80208 len:8 PRP1 0x0 PRP2 0x0
00:22:07.943 [2024-11-26 20:51:56.642419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.943 [2024-11-26 20:51:56.642486] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:22:07.943 [2024-11-26 20:51:56.642531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:07.943 [2024-11-26 20:51:56.642550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.943 [2024-11-26 20:51:56.642567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:07.943 [2024-11-26 20:51:56.642580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.943 [2024-11-26 20:51:56.642594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:07.943 [2024-11-26 20:51:56.642607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.943 [2024-11-26 20:51:56.642621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:07.943 [2024-11-26 20:51:56.642635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.943 [2024-11-26 20:51:56.642650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:22:07.943 [2024-11-26 20:51:56.642717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb00570 (9): Bad file descriptor
00:22:07.943 [2024-11-26 20:51:56.646002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:07.943 [2024-11-26 20:51:56.672370] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:22:07.943 8490.00 IOPS, 33.16 MiB/s [2024-11-26T19:52:11.640Z] 8663.00 IOPS, 33.84 MiB/s [2024-11-26T19:52:11.640Z] 8685.00 IOPS, 33.93 MiB/s [2024-11-26T19:52:11.640Z]
00:22:07.943 [2024-11-26 20:52:00.311350 .. 20:52:00.314942] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: each outstanding I/O on sqid:1 (WRITE lba:82872-82992, READ lba:81984-82728, len:8) completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [repeated command/completion pairs condensed]
00:22:07.946 [2024-11-26 20:52:00.314958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:82736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:07.946 [2024-11-26 20:52:00.314972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.946 [2024-11-26 20:52:00.314987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.946 [2024-11-26 20:52:00.315001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.946 [2024-11-26 20:52:00.315016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.946 [2024-11-26 20:52:00.315045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.946 [2024-11-26 20:52:00.315061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:82760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.946 [2024-11-26 20:52:00.315075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.946 [2024-11-26 20:52:00.315090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:82768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.946 [2024-11-26 20:52:00.315104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.946 [2024-11-26 20:52:00.315132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.946 [2024-11-26 20:52:00.315146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.946 [2024-11-26 20:52:00.315162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:82784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.946 [2024-11-26 20:52:00.315176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.946 [2024-11-26 20:52:00.315191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:82792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.946 [2024-11-26 20:52:00.315204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.946 [2024-11-26 20:52:00.315219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:82800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.946 [2024-11-26 20:52:00.315233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.946 [2024-11-26 20:52:00.315248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.946 [2024-11-26 20:52:00.315262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.946 [2024-11-26 20:52:00.315277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:82816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.946 [2024-11-26 20:52:00.315291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.946 [2024-11-26 20:52:00.315327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:82824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.946 [2024-11-26 20:52:00.315344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.946 [2024-11-26 20:52:00.315359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:82832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.946 [2024-11-26 20:52:00.315374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.946 [2024-11-26 20:52:00.315390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.946 [2024-11-26 20:52:00.315405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.946 [2024-11-26 20:52:00.315420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:82848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.947 [2024-11-26 20:52:00.315434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.947 [2024-11-26 20:52:00.315449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:82856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.947 [2024-11-26 20:52:00.315463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.947 [2024-11-26 20:52:00.315478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.947 [2024-11-26 20:52:00.315492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.947 [2024-11-26 20:52:00.315522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:07.947 [2024-11-26 20:52:00.315545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:07.947 [2024-11-26 20:52:00.315558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83000 len:8 PRP1 0x0 PRP2 0x0 00:22:07.947 [2024-11-26 20:52:00.315571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.947 [2024-11-26 20:52:00.315651] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:07.947 [2024-11-26 20:52:00.315704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.947 [2024-11-26 20:52:00.315723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.947 [2024-11-26 20:52:00.315739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.947 [2024-11-26 20:52:00.315753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.947 [2024-11-26 20:52:00.315767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.947 [2024-11-26 20:52:00.315781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.947 [2024-11-26 20:52:00.315795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.947 [2024-11-26 20:52:00.315808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.947 [2024-11-26 20:52:00.315823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:07.947 [2024-11-26 20:52:00.315861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb00570 (9): Bad file descriptor 00:22:07.947 [2024-11-26 20:52:00.319269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:07.947 [2024-11-26 20:52:00.341875] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:22:07.947 8630.00 IOPS, 33.71 MiB/s [2024-11-26T19:52:11.644Z] 8672.50 IOPS, 33.88 MiB/s [2024-11-26T19:52:11.644Z] 8676.86 IOPS, 33.89 MiB/s [2024-11-26T19:52:11.644Z] 8681.12 IOPS, 33.91 MiB/s [2024-11-26T19:52:11.644Z] 8690.56 IOPS, 33.95 MiB/s [2024-11-26T19:52:11.644Z] [2024-11-26 20:52:04.963174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.947 [2024-11-26 20:52:04.963235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.947 [2024-11-26 20:52:04.963262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.947 [2024-11-26 20:52:04.963278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.947 [2024-11-26 20:52:04.963319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.947 [2024-11-26 20:52:04.963336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.947 [2024-11-26 20:52:04.963352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.947 [2024-11-26 20:52:04.963366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.947 [2024-11-26 20:52:04.963382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.947 [2024-11-26 20:52:04.963397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.947 [2024-11-26 20:52:04.963424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.947 
[2024-11-26 20:52:04.963440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.947 [2024-11-26 20:52:04.963455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.947 [2024-11-26 20:52:04.963470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.947 [2024-11-26 20:52:04.963486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.947 [2024-11-26 20:52:04.963500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.947 [2024-11-26 20:52:04.963517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.947 [2024-11-26 20:52:04.963531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.947 [2024-11-26 20:52:04.963547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.947 [2024-11-26 20:52:04.963562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.947 [2024-11-26 20:52:04.963579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.947 [2024-11-26 20:52:04.963593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.947 [2024-11-26 20:52:04.963623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.947 [2024-11-26 20:52:04.963636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.947 [2024-11-26 20:52:04.963651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:19656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.947 [2024-11-26 20:52:04.963665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.947 [2024-11-26 20:52:04.963679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.947 [2024-11-26 20:52:04.963692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.947 [2024-11-26 20:52:04.963706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.947 [2024-11-26 20:52:04.963719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.947 [2024-11-26 20:52:04.963733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.947 [2024-11-26 20:52:04.963746] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.947 [2024-11-26 20:52:04.963760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.947 [2024-11-26 20:52:04.963773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.947 [2024-11-26 20:52:04.963788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.947 [2024-11-26 20:52:04.963806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.947 [2024-11-26 20:52:04.963821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.947 [2024-11-26 20:52:04.963834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.947 [2024-11-26 20:52:04.963848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.947 [2024-11-26 20:52:04.963861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.947 [2024-11-26 20:52:04.963876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.947 [2024-11-26 20:52:04.963889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.947 [2024-11-26 20:52:04.963903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.947 [2024-11-26 20:52:04.963916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.947 [2024-11-26 20:52:04.963931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.947 [2024-11-26 20:52:04.963959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.947 [2024-11-26 20:52:04.963975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.947 [2024-11-26 20:52:04.963989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.947 [2024-11-26 20:52:04.964003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.947 [2024-11-26 20:52:04.964017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.947 [2024-11-26 20:52:04.964032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.947 [2024-11-26 20:52:04.964047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.947 [2024-11-26 20:52:04.964062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.947 [2024-11-26 20:52:04.964075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.947 [2024-11-26 20:52:04.964089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.947 [2024-11-26 20:52:04.964103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.947 [2024-11-26 20:52:04.964118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-11-26 20:52:04.964131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.948 [2024-11-26 20:52:04.964146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-11-26 20:52:04.964159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.948 [2024-11-26 20:52:04.964179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-11-26 20:52:04.964193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.948 [2024-11-26 20:52:04.964207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-11-26 20:52:04.964221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.948 [2024-11-26 20:52:04.964235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-11-26 20:52:04.964248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.948 [2024-11-26 20:52:04.964263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-11-26 20:52:04.964277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.948 [2024-11-26 20:52:04.964292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-11-26 20:52:04.964332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.948 [2024-11-26 20:52:04.964358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-11-26 20:52:04.964374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.948 [2024-11-26 20:52:04.964390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-11-26 20:52:04.964404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.948 [2024-11-26 20:52:04.964419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-11-26 20:52:04.964433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.948 [2024-11-26 20:52:04.964449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-11-26 20:52:04.964464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.948 [2024-11-26 20:52:04.964479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-11-26 20:52:04.964493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.948 [2024-11-26 20:52:04.964508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-11-26 20:52:04.964522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.948 [2024-11-26 20:52:04.964538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-11-26 20:52:04.964552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.948 [2024-11-26 20:52:04.964567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-11-26 20:52:04.964585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.948 [2024-11-26 20:52:04.964602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-11-26 20:52:04.964632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.948 [2024-11-26 20:52:04.964647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-11-26 20:52:04.964661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.948 [2024-11-26 20:52:04.964676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-11-26 20:52:04.964689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:07.948 [2024-11-26 20:52:04.964704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-11-26 20:52:04.964718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.948 [2024-11-26 20:52:04.964733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-11-26 20:52:04.964746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.948 [2024-11-26 20:52:04.964761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-11-26 20:52:04.964775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.948 [2024-11-26 20:52:04.964791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-11-26 20:52:04.964805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.948 [2024-11-26 20:52:04.964819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-11-26 20:52:04.964833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.948 [2024-11-26 20:52:04.964848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-11-26 20:52:04.964861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.948 [2024-11-26 20:52:04.964876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:19968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-11-26 20:52:04.964890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.948 [2024-11-26 20:52:04.964904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-11-26 20:52:04.964917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.948 [2024-11-26 20:52:04.964932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-11-26 20:52:04.964946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.948 [2024-11-26 20:52:04.964965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-11-26 20:52:04.964979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.948 [2024-11-26 20:52:04.964994] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-11-26 20:52:04.965007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.948 [2024-11-26 20:52:04.965022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-11-26 20:52:04.965036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.948 [2024-11-26 20:52:04.965051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-11-26 20:52:04.965064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.948 [2024-11-26 20:52:04.965080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-11-26 20:52:04.965094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.948 [2024-11-26 20:52:04.965109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-11-26 20:52:04.965123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.948 [2024-11-26 20:52:04.965137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-11-26 20:52:04.965151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.948 [2024-11-26 20:52:04.965166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-11-26 20:52:04.965180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.948 [2024-11-26 20:52:04.965195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.948 [2024-11-26 20:52:04.965208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.948 [2024-11-26 20:52:04.965223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.965237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.965252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.965266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.965280] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.965293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.965332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.965355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.965377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.965392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.965408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.965422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.965438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.965452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.965467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:20120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.965481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.965498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.965512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.965528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.965553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.965569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.965584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.965599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.965629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.965645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:112 nsid:1 lba:20160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.965659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.965675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.965689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.965705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.965719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.965734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.965748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.965763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.965782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.965798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.965812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.965827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.965842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.965857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.965871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.965886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.965900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.965916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.965930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.965945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20240 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.965959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.965974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.965989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.966004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.966017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.966032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.966046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.966060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.966075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.966090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.966103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.966118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.966132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.966151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.966166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.966181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.966194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.966209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.966223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.966238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:07.949 [2024-11-26 20:52:04.966252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.966267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.966281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.966296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.966338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.966356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.966370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.966386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.966400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.966416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.966430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.949 [2024-11-26 20:52:04.966445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.949 [2024-11-26 20:52:04.966459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.950 [2024-11-26 20:52:04.966475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.950 [2024-11-26 20:52:04.966489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.950 [2024-11-26 20:52:04.966504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.950 [2024-11-26 20:52:04.966518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.950 [2024-11-26 20:52:04.966534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.950 [2024-11-26 20:52:04.966552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.950 [2024-11-26 20:52:04.966568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.950 [2024-11-26 20:52:04.966582] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.950 [2024-11-26 20:52:04.966597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.950 [2024-11-26 20:52:04.966626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.950 [2024-11-26 20:52:04.966642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.950 [2024-11-26 20:52:04.966656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.950 [2024-11-26 20:52:04.966671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.950 [2024-11-26 20:52:04.966685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.950 [2024-11-26 20:52:04.966700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.950 [2024-11-26 20:52:04.966715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.950 [2024-11-26 20:52:04.966730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.950 [2024-11-26 20:52:04.966743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.950 [2024-11-26 20:52:04.966759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.950 [2024-11-26 20:52:04.966772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.950 [2024-11-26 20:52:04.966795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.950 [2024-11-26 20:52:04.966809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.950 [2024-11-26 20:52:04.966824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.950 [2024-11-26 20:52:04.966838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.950 [2024-11-26 20:52:04.966853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.950 [2024-11-26 20:52:04.966867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.950 [2024-11-26 20:52:04.966881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.950 [2024-11-26 20:52:04.966895] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.950 [2024-11-26 20:52:04.966910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.950 [2024-11-26 20:52:04.966924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.950 [2024-11-26 20:52:04.966942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:20640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.950 [2024-11-26 20:52:04.966956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.950 [2024-11-26 20:52:04.966971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:07.950 [2024-11-26 20:52:04.966985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.950 [2024-11-26 20:52:04.967000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.950 [2024-11-26 20:52:04.967014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.950 [2024-11-26 20:52:04.967029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.950 [2024-11-26 20:52:04.967042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.950 [2024-11-26 20:52:04.967058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.950 [2024-11-26 20:52:04.967072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.950 [2024-11-26 20:52:04.967088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.950 [2024-11-26 20:52:04.967101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.950 [2024-11-26 20:52:04.967115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.950 [2024-11-26 20:52:04.967129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.950 [2024-11-26 20:52:04.967144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.950 [2024-11-26 20:52:04.967158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.950 [2024-11-26 20:52:04.967172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.950 [2024-11-26 20:52:04.967186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.950 [2024-11-26 20:52:04.967216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:07.950 [2024-11-26 20:52:04.967231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:07.950 [2024-11-26 20:52:04.967242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20512 len:8 PRP1 0x0 PRP2 0x0 00:22:07.950 [2024-11-26 20:52:04.967255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.950 [2024-11-26 20:52:04.967352] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:07.950 [2024-11-26 20:52:04.967395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.950 [2024-11-26 20:52:04.967414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.950 [2024-11-26 20:52:04.967429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.950 [2024-11-26 20:52:04.967448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.950 [2024-11-26 20:52:04.967464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.950 [2024-11-26 20:52:04.967478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.950 [2024-11-26 20:52:04.967492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.950 [2024-11-26 20:52:04.967506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.950 [2024-11-26 20:52:04.967519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:07.950 [2024-11-26 20:52:04.970858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:07.950 [2024-11-26 20:52:04.970901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb00570 (9): Bad file descriptor 00:22:07.950 [2024-11-26 20:52:05.037192] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:22:07.950 8622.20 IOPS, 33.68 MiB/s [2024-11-26T19:52:11.647Z] 8630.73 IOPS, 33.71 MiB/s [2024-11-26T19:52:11.647Z] 8649.17 IOPS, 33.79 MiB/s [2024-11-26T19:52:11.647Z] 8666.15 IOPS, 33.85 MiB/s [2024-11-26T19:52:11.647Z] 8676.79 IOPS, 33.89 MiB/s 00:22:07.950 Latency(us) 00:22:07.950 [2024-11-26T19:52:11.647Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:07.950 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:07.950 Verification LBA range: start 0x0 length 0x4000 00:22:07.950 NVMe0n1 : 15.00 8693.60 33.96 342.09 0.00 14137.81 555.24 16019.91 00:22:07.950 [2024-11-26T19:52:11.647Z] =================================================================================================================== 00:22:07.950 [2024-11-26T19:52:11.647Z] Total : 8693.60 33.96 342.09 0.00 14137.81 555.24 16019.91 00:22:07.950 Received shutdown signal, test time was about 15.000000 seconds 00:22:07.950 00:22:07.950 Latency(us) 00:22:07.950 [2024-11-26T19:52:11.647Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:07.950 [2024-11-26T19:52:11.647Z] =================================================================================================================== 00:22:07.950 [2024-11-26T19:52:11.647Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:07.950 20:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:07.950 20:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:22:07.950 20:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:22:07.950 20:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1731768 00:22:07.950 20:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:07.950 20:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1731768 /var/tmp/bdevperf.sock 00:22:07.950 20:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1731768 ']' 00:22:07.951 20:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:07.951 20:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:07.951 20:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:07.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:07.951 20:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:07.951 20:52:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:07.951 20:52:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:07.951 20:52:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:07.951 20:52:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:07.951 [2024-11-26 20:52:11.356999] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:07.951 20:52:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:07.951 [2024-11-26 20:52:11.621798] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:08.208 20:52:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:08.466 NVMe0n1 00:22:08.466 20:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:09.032 00:22:09.032 20:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:09.289 00:22:09.289 20:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:09.289 20:52:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:09.546 20:52:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:09.804 20:52:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:13.083 20:52:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:13.083 20:52:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:13.083 20:52:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1732511 00:22:13.083 20:52:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:13.083 20:52:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1732511 00:22:14.455 { 00:22:14.455 "results": [ 00:22:14.455 { 00:22:14.455 "job": "NVMe0n1", 00:22:14.455 "core_mask": "0x1", 
00:22:14.455 "workload": "verify", 00:22:14.455 "status": "finished", 00:22:14.455 "verify_range": { 00:22:14.455 "start": 0, 00:22:14.455 "length": 16384 00:22:14.455 }, 00:22:14.455 "queue_depth": 128, 00:22:14.455 "io_size": 4096, 00:22:14.455 "runtime": 1.008321, 00:22:14.455 "iops": 8748.206176406125, 00:22:14.455 "mibps": 34.17268037658643, 00:22:14.455 "io_failed": 0, 00:22:14.455 "io_timeout": 0, 00:22:14.455 "avg_latency_us": 14552.221374749653, 00:22:14.455 "min_latency_us": 2791.348148148148, 00:22:14.455 "max_latency_us": 15825.730370370371 00:22:14.455 } 00:22:14.455 ], 00:22:14.455 "core_count": 1 00:22:14.455 } 00:22:14.455 20:52:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:14.455 [2024-11-26 20:52:10.857021] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:22:14.455 [2024-11-26 20:52:10.857132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1731768 ] 00:22:14.456 [2024-11-26 20:52:10.925859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.456 [2024-11-26 20:52:10.982235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.456 [2024-11-26 20:52:13.366511] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:14.456 [2024-11-26 20:52:13.366588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.456 [2024-11-26 20:52:13.366610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.456 [2024-11-26 20:52:13.366626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.456 [2024-11-26 20:52:13.366648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.456 [2024-11-26 20:52:13.366661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.456 [2024-11-26 20:52:13.366674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.456 [2024-11-26 20:52:13.366705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:14.456 [2024-11-26 20:52:13.366719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.456 [2024-11-26 20:52:13.366733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:22:14.456 [2024-11-26 20:52:13.366778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:22:14.456 [2024-11-26 20:52:13.366810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d5570 (9): Bad file descriptor 00:22:14.456 [2024-11-26 20:52:13.498449] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:22:14.456 Running I/O for 1 seconds... 00:22:14.456 8693.00 IOPS, 33.96 MiB/s 00:22:14.456 Latency(us) 00:22:14.456 [2024-11-26T19:52:18.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:14.456 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:14.456 Verification LBA range: start 0x0 length 0x4000 00:22:14.456 NVMe0n1 : 1.01 8748.21 34.17 0.00 0.00 14552.22 2791.35 15825.73 00:22:14.456 [2024-11-26T19:52:18.153Z] =================================================================================================================== 00:22:14.456 [2024-11-26T19:52:18.153Z] Total : 8748.21 34.17 0.00 0.00 14552.22 2791.35 15825.73 00:22:14.456 20:52:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:14.456 20:52:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:22:14.456 20:52:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:14.715 20:52:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:14.715 20:52:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:22:14.972 20:52:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:15.266 20:52:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:22:18.573 20:52:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:18.573 20:52:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:22:18.573 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1731768 00:22:18.573 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1731768 ']' 00:22:18.573 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1731768 00:22:18.573 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:18.573 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:18.573 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1731768 00:22:18.573 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:18.573 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:18.573 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1731768' 00:22:18.573 killing process with pid 1731768 00:22:18.573 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1731768 00:22:18.573 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1731768 00:22:18.831 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:22:18.831 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:19.089 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:19.089 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:19.089 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:19.089 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:19.089 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:22:19.089 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:19.089 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:22:19.089 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:19.089 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:19.089 rmmod nvme_tcp 00:22:19.089 rmmod nvme_fabrics 00:22:19.089 rmmod nvme_keyring 00:22:19.347 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:19.347 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:22:19.347 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:22:19.347 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1729499 ']' 00:22:19.347 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1729499 00:22:19.347 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1729499 ']' 00:22:19.347 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1729499 00:22:19.347 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:19.347 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:19.347 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1729499 00:22:19.347 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:19.347 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:19.347 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1729499' 00:22:19.347 killing process with pid 1729499 00:22:19.347 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1729499 00:22:19.347 20:52:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1729499 00:22:19.605 20:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:22:19.605 20:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:19.605 20:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:19.605 20:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:22:19.605 20:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:22:19.605 20:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:19.605 20:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:22:19.605 20:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:19.605 20:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:19.605 20:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.605 20:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:19.605 20:52:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.531 20:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:21.531 00:22:21.531 real 0m35.810s 00:22:21.531 user 2m6.456s 00:22:21.531 sys 0m5.892s 00:22:21.531 20:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:21.531 20:52:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:21.531 ************************************ 00:22:21.531 END TEST nvmf_failover 00:22:21.531 ************************************ 00:22:21.531 20:52:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:21.531 20:52:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:21.531 20:52:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:21.531 20:52:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.531 ************************************ 00:22:21.531 START TEST nvmf_host_discovery 00:22:21.531 ************************************ 00:22:21.531 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:21.791 * Looking for test storage... 
00:22:21.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:21.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.791 --rc genhtml_branch_coverage=1 00:22:21.791 --rc genhtml_function_coverage=1 00:22:21.791 --rc genhtml_legend=1 00:22:21.791 --rc geninfo_all_blocks=1 00:22:21.791 --rc geninfo_unexecuted_blocks=1 00:22:21.791 00:22:21.791 ' 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:21.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.791 --rc genhtml_branch_coverage=1 00:22:21.791 --rc genhtml_function_coverage=1 00:22:21.791 --rc genhtml_legend=1 00:22:21.791 --rc geninfo_all_blocks=1 00:22:21.791 --rc geninfo_unexecuted_blocks=1 00:22:21.791 00:22:21.791 ' 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:21.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.791 --rc genhtml_branch_coverage=1 00:22:21.791 --rc genhtml_function_coverage=1 00:22:21.791 --rc genhtml_legend=1 00:22:21.791 --rc geninfo_all_blocks=1 00:22:21.791 --rc geninfo_unexecuted_blocks=1 00:22:21.791 00:22:21.791 ' 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:21.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.791 --rc genhtml_branch_coverage=1 00:22:21.791 --rc genhtml_function_coverage=1 00:22:21.791 --rc genhtml_legend=1 00:22:21.791 --rc geninfo_all_blocks=1 00:22:21.791 --rc geninfo_unexecuted_blocks=1 00:22:21.791 00:22:21.791 ' 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:21.791 20:52:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:21.791 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:21.792 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.792 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.792 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.792 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:21.792 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.792 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:22:21.792 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:21.792 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:21.792 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:21.792 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:21.792 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:21.792 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:21.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:21.792 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:21.792 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:21.792 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:21.792 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:21.792 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:21.792 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:21.792 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:21.792 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:21.792 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:21.792 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:22:21.792 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:21.792 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:21.792 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:21.792 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:21.792 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:21.792 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.792 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:21.792 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.792 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:21.792 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:21.792 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:22:21.792 20:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:23.694 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:23.694 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:23.694 20:52:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:23.694 Found net devices under 0000:09:00.0: cvl_0_0 00:22:23.694 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.695 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:23.695 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.695 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:23.695 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.695 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:23.695 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:23.695 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.695 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:23.695 Found net devices under 0000:09:00.1: cvl_0_1 00:22:23.695 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.695 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:23.695 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:22:23.695 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:23.695 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:23.695 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:23.695 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:23.695 
20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:23.695 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:23.695 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:23.695 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:23.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:23.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:22:23.958 00:22:23.958 --- 10.0.0.2 ping statistics --- 00:22:23.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.958 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:23.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:23.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:22:23.958 00:22:23.958 --- 10.0.0.1 ping statistics --- 00:22:23.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.958 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1735171 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1735171 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1735171 ']' 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:23.958 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:23.958 [2024-11-26 20:52:27.611506] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:22:23.958 [2024-11-26 20:52:27.611581] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:24.216 [2024-11-26 20:52:27.685660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.216 [2024-11-26 20:52:27.744261] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:24.216 [2024-11-26 20:52:27.744322] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:24.216 [2024-11-26 20:52:27.744338] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:24.216 [2024-11-26 20:52:27.744350] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:24.216 [2024-11-26 20:52:27.744359] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:24.216 [2024-11-26 20:52:27.744961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:24.216 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:24.216 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:22:24.216 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:24.216 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:24.216 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.216 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:24.216 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:24.216 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.216 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.216 [2024-11-26 20:52:27.897205] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:24.216 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.216 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:24.216 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.216 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.216 [2024-11-26 20:52:27.905437] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:24.216 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.216 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:24.216 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.216 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.474 null0 00:22:24.474 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.474 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:24.474 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.474 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.474 null1 00:22:24.474 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.474 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:24.474 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.474 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.474 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.474 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1735196 00:22:24.474 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:24.474 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1735196 /tmp/host.sock 00:22:24.474 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1735196 ']' 00:22:24.474 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:22:24.474 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:24.474 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:24.474 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:24.474 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:24.474 20:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.474 [2024-11-26 20:52:27.977850] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:22:24.474 [2024-11-26 20:52:27.977914] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1735196 ] 00:22:24.474 [2024-11-26 20:52:28.044761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.474 [2024-11-26 20:52:28.103238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:24.732 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.991 [2024-11-26 20:52:28.551148] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:22:24.991 20:52:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:24.991 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.992 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.992 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.992 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:24.992 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:22:24.992 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:24.992 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:24.992 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:24.992 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.992 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.992 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.992 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:24.992 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:24.992 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:24.992 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:24.992 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:24.992 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:24.992 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:24.992 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.992 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:24.992 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:24.992 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:24.992 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:25.250 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.250 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:22:25.250 20:52:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:22:25.817 [2024-11-26 20:52:29.324887] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:25.817 [2024-11-26 20:52:29.324919] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:25.817 [2024-11-26 20:52:29.324939] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:25.817 
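The RPCs traced above are, in order: start discovery on the host against the 8009 service, confirm the controller and bdev lists start out empty, then build cnode0 on the target (subsystem, null0 namespace, 4420 data listener, allowed host NQN) and wait for the host's discovery service to attach nvme0. A sketch of that flow under the same assumptions as before (scripts/rpc.py, addresses and NQNs taken from this run):

# Host: enable bdev_nvme logging and start discovery against the 8009 discovery subsystem.
./scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test
# Target: subsystem cnode0 with the null0 namespace, a data listener on 4420, and the test host NQN.
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
# Wait for the host's discovery service to attach the nvme0 controller.
until [[ "$(./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs)" == nvme0 ]]; do
    sleep 1
done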
[2024-11-26 20:52:29.411207] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:26.074 [2024-11-26 20:52:29.634532] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:22:26.074 [2024-11-26 20:52:29.635529] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1b0efe0:1 started. 00:22:26.075 [2024-11-26 20:52:29.637207] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:26.075 [2024-11-26 20:52:29.637226] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:26.075 [2024-11-26 20:52:29.685054] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1b0efe0 was disconnected and freed. delete nvme_qpair. 00:22:26.075 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:26.075 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:26.075 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:26.075 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:26.075 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:26.075 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.075 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:26.075 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.075 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:26.075 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.333 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.333 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:26.333 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:26.333 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:26.333 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:26.333 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:26.333 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:26.333 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:26.333 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:26.333 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:26.333 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.333 20:52:29 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.333 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:26.333 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:26.333 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.333 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:26.333 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:26.333 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:26.333 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:26.333 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:26.333 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:26.333 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:26.333 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:26.333 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:26.333 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:26.333 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.333 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.333 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:26.333 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:26.333 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.333 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:26.334 [2024-11-26 20:52:29.917472] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1b0f1c0:1 started. 00:22:26.334 [2024-11-26 20:52:29.923930] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1b0f1c0 was disconnected and freed. delete nvme_qpair. 
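The notification checks in this part of the trace keep a running notify_id and ask the host only for events newer than it, so expected_count is compared against fresh notifications each time. A minimal reconstruction of that helper, consistent with the -i 0 / -i 1 calls and the notify_id bookkeeping shown above (the real helper lives in the test's discovery.sh):

notify_id=0
get_notification_count() {
    # Count only notifications newer than the last seen id, then advance the cursor past them.
    notification_count=$(./scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}
# Example: after the first namespace is attached, exactly one new notification is expected.
expected_count=1
get_notification_count
(( notification_count == expected_count )) && echo "notification count matches"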
00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.334 20:52:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.334 [2024-11-26 20:52:30.003814] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:26.334 [2024-11-26 20:52:30.004178] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:26.334 [2024-11-26 20:52:30.004210] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:26.334 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.334 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:22:26.334 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:26.334 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:26.334 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:26.334 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:26.334 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:26.334 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:26.334 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.334 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.334 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:26.334 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:26.334 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:26.334 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.592 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.592 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:26.592 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:26.592 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:26.592 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:26.592 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:26.592 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:26.592 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:26.592 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:26.592 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:26.592 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.592 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.592 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:26.592 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:26.592 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.592 [2024-11-26 20:52:30.092862] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:26.592 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 
nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:26.592 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:26.592 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:26.592 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:26.592 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:26.592 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:26.592 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:26.592 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:26.592 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:26.592 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:26.592 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.592 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:26.592 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:26.592 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:26.592 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.592 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:22:26.592 20:52:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:22:26.593 [2024-11-26 20:52:30.199039] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:22:26.593 [2024-11-26 20:52:30.199095] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:26.593 [2024-11-26 20:52:30.199118] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:26.593 [2024-11-26 20:52:30.199126] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:27.526 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:27.526 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:27.526 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:27.526 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:27.526 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:27.526 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:27.526 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:27.526 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:27.526 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:27.526 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.526 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:27.526 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:27.526 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:27.526 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:27.526 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:27.526 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:27.526 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:27.526 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:27.526 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:27.526 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:27.526 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:27.526 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:27.526 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.526 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:27.526 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.786 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:27.786 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:27.786 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:27.786 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:27.786 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:27.786 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.786 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:27.786 [2024-11-26 20:52:31.240121] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:27.786 [2024-11-26 20:52:31.240180] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:27.786 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.786 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:27.786 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:27.786 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:27.786 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:27.786 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:27.786 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:27.786 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:27.786 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:27.786 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.787 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:27.787 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:27.787 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:27.787 [2024-11-26 20:52:31.249152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.787 [2024-11-26 20:52:31.249201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.787 [2024-11-26 20:52:31.249219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 
cdw11:00000000 00:22:27.787 [2024-11-26 20:52:31.249234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.787 [2024-11-26 20:52:31.249248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.787 [2024-11-26 20:52:31.249261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.787 [2024-11-26 20:52:31.249274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.787 [2024-11-26 20:52:31.249287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.787 [2024-11-26 20:52:31.249300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae10e0 is same with the state(6) to be set 00:22:27.787 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.787 [2024-11-26 20:52:31.259165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae10e0 (9): Bad file descriptor 00:22:27.787 [2024-11-26 20:52:31.269212] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:27.787 [2024-11-26 20:52:31.269234] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:27.787 [2024-11-26 20:52:31.269243] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:27.787 [2024-11-26 20:52:31.269252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:27.787 [2024-11-26 20:52:31.269298] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:27.787 [2024-11-26 20:52:31.269473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.787 [2024-11-26 20:52:31.269507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae10e0 with addr=10.0.0.2, port=4420 00:22:27.787 [2024-11-26 20:52:31.269525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae10e0 is same with the state(6) to be set 00:22:27.787 [2024-11-26 20:52:31.269548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae10e0 (9): Bad file descriptor 00:22:27.787 [2024-11-26 20:52:31.269570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:27.787 [2024-11-26 20:52:31.269583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:27.787 [2024-11-26 20:52:31.269598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:27.787 [2024-11-26 20:52:31.269611] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:27.787 [2024-11-26 20:52:31.269621] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:22:27.787 [2024-11-26 20:52:31.269629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:27.787 [2024-11-26 20:52:31.279330] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:27.787 [2024-11-26 20:52:31.279360] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:27.787 [2024-11-26 20:52:31.279369] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:27.787 [2024-11-26 20:52:31.279377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:27.787 [2024-11-26 20:52:31.279401] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:27.787 [2024-11-26 20:52:31.279551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.787 [2024-11-26 20:52:31.279578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae10e0 with addr=10.0.0.2, port=4420 00:22:27.787 [2024-11-26 20:52:31.279594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae10e0 is same with the state(6) to be set 00:22:27.787 [2024-11-26 20:52:31.279616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae10e0 (9): Bad file descriptor 00:22:27.787 [2024-11-26 20:52:31.279636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:27.787 [2024-11-26 20:52:31.279649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:27.787 [2024-11-26 20:52:31.279662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:27.787 [2024-11-26 20:52:31.279673] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:27.787 [2024-11-26 20:52:31.279682] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:27.787 [2024-11-26 20:52:31.279689] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:22:27.787 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.787 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:27.787 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:27.787 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:27.787 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:27.787 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:27.787 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:27.787 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:27.787 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:27.787 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:27.787 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.787 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:27.787 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:27.787 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:27.787 [2024-11-26 20:52:31.289437] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:27.787 [2024-11-26 20:52:31.289462] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:27.787 [2024-11-26 20:52:31.289474] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:27.787 [2024-11-26 20:52:31.289482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:27.787 [2024-11-26 20:52:31.289510] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:27.787 [2024-11-26 20:52:31.289619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.787 [2024-11-26 20:52:31.289647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae10e0 with addr=10.0.0.2, port=4420 00:22:27.787 [2024-11-26 20:52:31.289664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae10e0 is same with the state(6) to be set 00:22:27.787 [2024-11-26 20:52:31.289686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae10e0 (9): Bad file descriptor 00:22:27.787 [2024-11-26 20:52:31.289706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:27.787 [2024-11-26 20:52:31.289719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:27.787 [2024-11-26 20:52:31.289732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:22:27.787 [2024-11-26 20:52:31.289745] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:27.787 [2024-11-26 20:52:31.289754] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:27.787 [2024-11-26 20:52:31.289761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:27.787 [2024-11-26 20:52:31.299545] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:27.787 [2024-11-26 20:52:31.299569] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:27.787 [2024-11-26 20:52:31.299579] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:27.787 [2024-11-26 20:52:31.299587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:27.787 [2024-11-26 20:52:31.299628] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:27.787 [2024-11-26 20:52:31.299765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.787 [2024-11-26 20:52:31.299793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae10e0 with addr=10.0.0.2, port=4420 00:22:27.787 [2024-11-26 20:52:31.299809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae10e0 is same with the state(6) to be set 00:22:27.787 [2024-11-26 20:52:31.299837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae10e0 (9): Bad file descriptor 00:22:27.787 [2024-11-26 20:52:31.299858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:27.787 [2024-11-26 20:52:31.299872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:27.787 [2024-11-26 20:52:31.299884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:27.787 [2024-11-26 20:52:31.299896] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:27.787 [2024-11-26 20:52:31.299905] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:27.787 [2024-11-26 20:52:31.299912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:27.787 [2024-11-26 20:52:31.309662] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:27.787 [2024-11-26 20:52:31.309682] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:27.788 [2024-11-26 20:52:31.309691] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:27.788 [2024-11-26 20:52:31.309698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:27.788 [2024-11-26 20:52:31.309736] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:22:27.788 [2024-11-26 20:52:31.309907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.788 [2024-11-26 20:52:31.309934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae10e0 with addr=10.0.0.2, port=4420 00:22:27.788 [2024-11-26 20:52:31.309950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae10e0 is same with the state(6) to be set 00:22:27.788 [2024-11-26 20:52:31.309972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae10e0 (9): Bad file descriptor 00:22:27.788 [2024-11-26 20:52:31.309991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:27.788 [2024-11-26 20:52:31.310004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:27.788 [2024-11-26 20:52:31.310017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:27.788 [2024-11-26 20:52:31.310029] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:27.788 [2024-11-26 20:52:31.310037] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:27.788 [2024-11-26 20:52:31.310045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.788 [2024-11-26 20:52:31.319770] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:27.788 [2024-11-26 20:52:31.319789] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:27.788 [2024-11-26 20:52:31.319798] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:27.788 [2024-11-26 20:52:31.319804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:27.788 [2024-11-26 20:52:31.319841] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:27.788 [2024-11-26 20:52:31.320045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:27.788 [2024-11-26 20:52:31.320072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ae10e0 with addr=10.0.0.2, port=4420 00:22:27.788 [2024-11-26 20:52:31.320093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae10e0 is same with the state(6) to be set 00:22:27.788 [2024-11-26 20:52:31.320115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ae10e0 (9): Bad file descriptor 00:22:27.788 [2024-11-26 20:52:31.320136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:27.788 [2024-11-26 20:52:31.320149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:27.788 [2024-11-26 20:52:31.320162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
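The connect() errno-111 and "Resetting controller failed" records above line up with this step of the test: the 4420 listener has been removed while the host still had a qpair on it, so bdev_nvme keeps retrying the dead 4420 path until the next discovery log page drops it and leaves only 4421. The listener swap and the path check the script performs next, sketched with scripts/rpc.py and the jq pipeline the trace itself uses:

# Target: bring up the second data path on 4421, then tear down the original 4420 listener.
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Host: wait until the only remaining path for nvme0 is the 4421 listener.
until [[ "$(./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)" == 4421 ]]; do
    sleep 1
done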
00:22:27.788 [2024-11-26 20:52:31.320174] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:27.788 [2024-11-26 20:52:31.320182] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:27.788 [2024-11-26 20:52:31.320190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:27.788 [2024-11-26 20:52:31.327485] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:27.788 [2024-11-26 20:52:31.327514] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == 
expected_count))' 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:27.788 20:52:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:27.788 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.046 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:22:28.046 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:28.046 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:28.046 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:22:28.046 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:28.046 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:28.046 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:28.046 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:28.046 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:28.046 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:28.046 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:28.046 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:28.046 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.046 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:28.046 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.046 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:22:28.046 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:22:28.046 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:28.046 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:28.047 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:28.047 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.047 20:52:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:28.982 [2024-11-26 20:52:32.602889] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:28.982 [2024-11-26 20:52:32.602922] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:28.982 [2024-11-26 20:52:32.602943] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:29.242 [2024-11-26 20:52:32.690225] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:29.242 [2024-11-26 20:52:32.796118] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:22:29.242 [2024-11-26 20:52:32.796930] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1c41460:1 started. 00:22:29.242 [2024-11-26 20:52:32.799057] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:29.242 [2024-11-26 20:52:32.799099] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:29.242 [2024-11-26 20:52:32.802051] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1c41460 was disconnected and freed. delete nvme_qpair. 
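The NOT rpc_cmd call above (host/discovery.sh:143) deliberately repeats bdev_nvme_start_discovery with the bdev prefix nvme that is already registered; the JSON-RPC request/response pair just below is the expected rejection, code -17 "File exists". A hedged standalone equivalent of that idempotency check, driving rpc.py directly against the host socket rather than through the rpc_cmd wrapper:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -w
    # a second start with the same -b name must fail with -17 "File exists"
    $RPC -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test -w || echo "duplicate start rejected as expected"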
00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.242 request: 00:22:29.242 { 00:22:29.242 "name": "nvme", 00:22:29.242 "trtype": "tcp", 00:22:29.242 "traddr": "10.0.0.2", 00:22:29.242 "adrfam": "ipv4", 00:22:29.242 "trsvcid": "8009", 00:22:29.242 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:29.242 "wait_for_attach": true, 00:22:29.242 "method": "bdev_nvme_start_discovery", 00:22:29.242 "req_id": 1 00:22:29.242 } 00:22:29.242 Got JSON-RPC error response 00:22:29.242 response: 00:22:29.242 { 00:22:29.242 "code": -17, 00:22:29.242 "message": "File exists" 00:22:29.242 } 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
sort 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.242 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.242 request: 00:22:29.242 { 00:22:29.242 "name": "nvme_second", 00:22:29.242 "trtype": "tcp", 00:22:29.242 "traddr": "10.0.0.2", 00:22:29.242 "adrfam": "ipv4", 00:22:29.242 "trsvcid": "8009", 00:22:29.242 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:29.242 "wait_for_attach": true, 00:22:29.242 "method": "bdev_nvme_start_discovery", 00:22:29.242 "req_id": 1 00:22:29.242 } 00:22:29.242 Got JSON-RPC error response 00:22:29.242 response: 00:22:29.242 { 00:22:29.243 "code": -17, 00:22:29.243 "message": "File exists" 00:22:29.243 } 00:22:29.243 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:29.243 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:29.243 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:29.243 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:29.243 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:29.243 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:29.243 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:29.243 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:29.243 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.243 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:22:29.243 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:29.243 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:29.243 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.501 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:29.501 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:22:29.501 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:29.501 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:29.501 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.501 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:29.501 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:29.501 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:29.501 20:52:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.501 20:52:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:29.501 20:52:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:29.501 20:52:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:29.501 20:52:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:29.501 20:52:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:29.501 20:52:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:29.501 20:52:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:29.501 20:52:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:29.501 20:52:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:29.501 20:52:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.501 20:52:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:30.436 [2024-11-26 20:52:34.014545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:30.436 [2024-11-26 20:52:34.014590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c41670 with addr=10.0.0.2, port=8010 00:22:30.436 [2024-11-26 20:52:34.014620] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:30.436 [2024-11-26 20:52:34.014635] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:30.436 [2024-11-26 20:52:34.014648] 
bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:31.368 [2024-11-26 20:52:35.017101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:31.368 [2024-11-26 20:52:35.017157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c41670 with addr=10.0.0.2, port=8010 00:22:31.368 [2024-11-26 20:52:35.017204] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:31.368 [2024-11-26 20:52:35.017219] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:31.368 [2024-11-26 20:52:35.017231] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:32.737 [2024-11-26 20:52:36.019243] bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:32.737 request: 00:22:32.737 { 00:22:32.737 "name": "nvme_second", 00:22:32.737 "trtype": "tcp", 00:22:32.737 "traddr": "10.0.0.2", 00:22:32.737 "adrfam": "ipv4", 00:22:32.737 "trsvcid": "8010", 00:22:32.737 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:32.737 "wait_for_attach": false, 00:22:32.737 "attach_timeout_ms": 3000, 00:22:32.737 "method": "bdev_nvme_start_discovery", 00:22:32.737 "req_id": 1 00:22:32.737 } 00:22:32.737 Got JSON-RPC error response 00:22:32.737 response: 00:22:32.737 { 00:22:32.737 "code": -110, 00:22:32.737 "message": "Connection timed out" 00:22:32.737 } 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1735196 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@121 -- # sync 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:32.737 rmmod nvme_tcp 00:22:32.737 rmmod nvme_fabrics 00:22:32.737 rmmod nvme_keyring 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1735171 ']' 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1735171 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 1735171 ']' 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 1735171 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1735171 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1735171' 00:22:32.737 killing process with pid 1735171 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 1735171 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 1735171 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.737 20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:32.737 
20:52:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.273 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:35.273 00:22:35.273 real 0m13.254s 00:22:35.273 user 0m19.137s 00:22:35.273 sys 0m2.854s 00:22:35.273 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:35.273 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.273 ************************************ 00:22:35.273 END TEST nvmf_host_discovery 00:22:35.273 ************************************ 00:22:35.273 20:52:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:35.273 20:52:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:35.273 20:52:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:35.273 20:52:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:35.273 ************************************ 00:22:35.273 START TEST nvmf_host_multipath_status 00:22:35.273 ************************************ 00:22:35.273 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:35.273 * Looking for test storage... 00:22:35.273 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:22:35.274 20:52:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:35.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.274 --rc genhtml_branch_coverage=1 00:22:35.274 --rc genhtml_function_coverage=1 00:22:35.274 --rc genhtml_legend=1 00:22:35.274 --rc geninfo_all_blocks=1 00:22:35.274 --rc geninfo_unexecuted_blocks=1 00:22:35.274 00:22:35.274 ' 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:35.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.274 --rc genhtml_branch_coverage=1 00:22:35.274 --rc genhtml_function_coverage=1 00:22:35.274 --rc genhtml_legend=1 00:22:35.274 --rc geninfo_all_blocks=1 00:22:35.274 --rc geninfo_unexecuted_blocks=1 00:22:35.274 00:22:35.274 ' 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:35.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.274 --rc genhtml_branch_coverage=1 00:22:35.274 --rc genhtml_function_coverage=1 00:22:35.274 --rc genhtml_legend=1 00:22:35.274 --rc geninfo_all_blocks=1 00:22:35.274 --rc geninfo_unexecuted_blocks=1 00:22:35.274 00:22:35.274 ' 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:35.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:35.274 --rc genhtml_branch_coverage=1 00:22:35.274 --rc genhtml_function_coverage=1 00:22:35.274 --rc 
genhtml_legend=1 00:22:35.274 --rc geninfo_all_blocks=1 00:22:35.274 --rc geninfo_unexecuted_blocks=1 00:22:35.274 00:22:35.274 ' 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.274 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:35.275 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:35.275 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:22:35.275 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:35.275 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:35.275 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:35.275 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:35.275 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:35.275 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' 
-eq 1 ']' 00:22:35.275 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:35.275 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:35.275 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:35.275 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:35.275 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:35.275 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:35.275 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:35.275 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:22:35.275 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:35.275 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:35.275 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:22:35.275 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:35.275 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:35.275 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:35.275 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:35.275 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:35.275 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.275 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:35.275 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:35.275 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:35.275 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:35.275 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:22:35.275 20:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:37.220 20:52:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:37.220 
20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:37.220 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:37.220 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:37.220 Found net devices under 0000:09:00.0: cvl_0_0 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:37.220 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:37.221 Found net devices under 0000:09:00.1: cvl_0_1 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:37.221 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:37.221 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:22:37.221 00:22:37.221 --- 10.0.0.2 ping statistics --- 00:22:37.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.221 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:37.221 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:37.221 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:22:37.221 00:22:37.221 --- 10.0.0.1 ping statistics --- 00:22:37.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:37.221 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1738350 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 1738350 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1738350 ']' 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:37.221 20:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:37.480 [2024-11-26 20:52:40.930548] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:22:37.480 [2024-11-26 20:52:40.930659] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:37.480 [2024-11-26 20:52:41.004708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:37.480 [2024-11-26 20:52:41.060706] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:37.480 [2024-11-26 20:52:41.060765] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:37.480 [2024-11-26 20:52:41.060794] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:37.480 [2024-11-26 20:52:41.060805] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:37.480 [2024-11-26 20:52:41.060815] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
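[annotation] The nvmf/common.sh trace above condenses to the following target/initiator bring-up. This is a hedged sketch reconstructed from the log, not the autotest helpers themselves: interface names, IPs and nvmf_tgt flags are copied from the trace, paths are shortened, and the polling loop only approximates what waitforlisten does.

# cvl_0_0 moves into a namespace and carries the target IP; cvl_0_1 stays in the
# root namespace as the initiator-side port.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Start the target inside the namespace (-m 0x3: cores 0-1, -e 0xFFFF: tracepoint
# group mask, -i 0: shm id) and wait for its RPC socket to accept requests.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5    # approximation of waitforlisten's readiness poll
done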
00:22:37.480 [2024-11-26 20:52:41.062276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:37.480 [2024-11-26 20:52:41.062281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.738 20:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:37.738 20:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:22:37.738 20:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:37.738 20:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:37.738 20:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:37.738 20:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:37.738 20:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1738350 00:22:37.738 20:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:37.996 [2024-11-26 20:52:41.454148] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:37.996 20:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:38.254 Malloc0 00:22:38.254 20:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:38.512 20:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:38.769 20:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:39.027 [2024-11-26 20:52:42.557060] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:39.027 20:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:39.285 [2024-11-26 20:52:42.817748] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:39.285 20:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1738523 00:22:39.285 20:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:39.285 20:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:39.285 20:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1738523 
/var/tmp/bdevperf.sock 00:22:39.285 20:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1738523 ']' 00:22:39.285 20:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:39.285 20:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:39.285 20:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:39.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:39.285 20:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:39.285 20:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:39.543 20:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:39.543 20:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:22:39.543 20:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:39.801 20:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:40.366 Nvme0n1 00:22:40.366 20:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:40.931 Nvme0n1 00:22:40.931 20:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:40.931 20:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:42.831 20:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:42.831 20:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:43.089 20:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:43.347 20:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:44.721 20:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:44.721 20:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:44.721 20:52:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:44.721 20:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:44.721 20:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:44.721 20:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:44.721 20:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:44.721 20:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:44.979 20:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:44.979 20:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:44.979 20:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:44.979 20:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:45.237 20:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:45.237 20:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:45.237 20:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:45.237 20:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:45.495 20:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:45.495 20:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:45.495 20:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:45.495 20:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:45.752 20:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:45.752 20:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:45.752 20:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:45.752 20:52:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:46.011 20:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:46.011 20:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:46.011 20:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:46.268 20:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:46.526 20:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:47.898 20:52:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:47.898 20:52:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:47.898 20:52:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:47.898 20:52:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:47.898 20:52:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:47.898 20:52:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:47.898 20:52:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:47.898 20:52:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:48.156 20:52:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:48.156 20:52:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:48.156 20:52:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:48.156 20:52:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:48.414 20:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:48.414 20:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:48.414 20:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:48.414 20:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:48.672 20:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:48.672 20:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:48.672 20:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:48.672 20:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:48.931 20:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:48.931 20:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:48.931 20:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:48.931 20:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:49.189 20:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:49.189 20:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:49.189 20:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:49.446 20:52:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:49.703 20:52:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:51.074 20:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:51.074 20:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:51.074 20:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.074 20:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:51.074 20:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:51.074 20:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:51.074 20:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.074 20:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:51.332 20:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:51.332 20:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:51.332 20:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.332 20:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:51.590 20:52:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:51.590 20:52:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:51.590 20:52:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.590 20:52:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:51.848 20:52:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:51.848 20:52:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:51.848 20:52:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.848 20:52:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:52.106 20:52:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:52.106 20:52:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:52.106 20:52:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:52.106 20:52:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:52.672 20:52:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:52.672 20:52:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:52.672 20:52:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
non_optimized 00:22:52.672 20:52:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:52.946 20:52:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:54.319 20:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:54.319 20:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:54.319 20:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.319 20:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:54.319 20:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:54.319 20:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:54.319 20:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.319 20:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:54.577 20:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:54.577 20:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:54.577 20:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.577 20:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:54.834 20:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:54.834 20:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:54.834 20:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.834 20:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:55.093 20:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.093 20:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:55.093 20:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:22:55.093 20:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:55.351 20:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.352 20:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:55.352 20:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.352 20:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:55.610 20:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:55.610 20:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:55.610 20:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:56.175 20:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:56.175 20:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:57.549 20:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:57.549 20:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:57.549 20:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.549 20:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:57.549 20:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:57.549 20:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:57.549 20:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.549 20:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:57.806 20:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:57.807 20:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:57.807 20:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.807 20:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:58.065 20:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:58.065 20:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:58.065 20:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.065 20:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:58.331 20:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:58.331 20:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:58.331 20:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.331 20:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:58.647 20:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:58.647 20:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:58.647 20:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.647 20:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:58.905 20:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:58.905 20:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:58.905 20:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:59.163 20:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:59.420 20:53:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:00.354 20:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:00.354 20:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:00.354 20:53:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.354 20:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:00.612 20:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:00.612 20:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:00.612 20:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.612 20:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:01.178 20:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.178 20:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:01.178 20:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.178 20:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:01.178 20:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.178 20:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:01.178 20:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.178 20:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:01.435 20:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.435 20:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:01.435 20:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.435 20:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:01.693 20:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:01.693 20:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:01.951 20:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.951 
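[annotation] Every port_status line in this trace is the same rpc.py + jq pair: dump bdevperf's I/O paths and pull one boolean for the path whose trsvcid matches the port. A minimal standalone sketch of that check, using the bdevperf RPC socket from the log; check_path_field is a hypothetical name (the real helper is port_status in test/nvmf/host/multipath_status.sh) and the rpc.py path is shortened.

check_path_field() {
    local port=$1 field=$2 expected=$3
    local got
    got=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ "$got" == "$expected" ]]    # the test asserts exactly this
}

# Example matching the checks above: with 4420 non_optimized and 4421 optimized,
# the optimized path is the active ("current") one and both stay accessible.
check_path_field 4421 current true
check_path_field 4420 accessible true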
20:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:02.210 20:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:02.210 20:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:02.468 20:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:23:02.468 20:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:02.726 20:53:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:02.985 20:53:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:03.919 20:53:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:03.919 20:53:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:03.919 20:53:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:03.919 20:53:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:04.177 20:53:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:04.177 20:53:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:04.177 20:53:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.177 20:53:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:04.435 20:53:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:04.435 20:53:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:04.436 20:53:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.436 20:53:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:04.694 20:53:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:04.694 20:53:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:04.694 20:53:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.694 20:53:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:04.952 20:53:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:04.952 20:53:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:04.952 20:53:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.952 20:53:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:05.211 20:53:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.211 20:53:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:05.211 20:53:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.211 20:53:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:05.469 20:53:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.469 20:53:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:05.469 20:53:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:05.726 20:53:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:05.984 20:53:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:07.359 20:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:07.359 20:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:07.359 20:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.359 20:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:07.359 20:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:07.359 20:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:07.359 20:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.359 20:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:07.617 20:53:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:07.617 20:53:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:07.617 20:53:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.617 20:53:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:07.875 20:53:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:07.875 20:53:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:07.875 20:53:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.875 20:53:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:08.134 20:53:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:08.134 20:53:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:08.134 20:53:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.134 20:53:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:08.392 20:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:08.392 20:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:08.392 20:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.392 20:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:08.959 20:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:08.959 20:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:08.959 
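[annotation] The set_ANA_state wrapper the trace keeps invoking (here with non_optimized non_optimized) boils down to two target-side RPCs, one per listener, followed by a short settling delay before check_status re-queries the host. A condensed sketch reconstructed from the trace; set_ana_state is a hypothetical name, the NQN and ports are copied from the log, and the rpc.py path is shortened.

set_ana_state() {
    local state_4420=$1 state_4421=$2
    ./scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$state_4420"
    ./scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$state_4421"
}

set_ana_state non_optimized non_optimized   # the transition logged just below
sleep 1    # as in the trace: give the initiator time to pick up the ANA change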
20:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:08.959 20:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:09.217 20:53:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:23:10.590 20:53:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:10.590 20:53:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:10.590 20:53:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.590 20:53:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:10.590 20:53:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:10.590 20:53:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:10.590 20:53:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.590 20:53:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:10.848 20:53:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:10.848 20:53:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:10.848 20:53:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.848 20:53:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:11.106 20:53:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.106 20:53:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:11.106 20:53:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.106 20:53:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:11.365 20:53:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.365 20:53:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:11.365 20:53:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.365 20:53:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:11.623 20:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.623 20:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:11.623 20:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.623 20:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:11.881 20:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.881 20:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:11.881 20:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:12.139 20:53:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:12.706 20:53:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:13.639 20:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:13.639 20:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:13.639 20:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:13.640 20:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:13.897 20:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:13.897 20:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:13.897 20:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:13.897 20:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:14.155 20:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:23:14.155 20:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:14.155 20:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.155 20:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:14.414 20:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.414 20:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:14.414 20:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.414 20:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:14.672 20:53:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.672 20:53:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:14.672 20:53:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.672 20:53:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:14.930 20:53:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.930 20:53:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:14.930 20:53:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.930 20:53:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:15.188 20:53:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:15.188 20:53:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1738523 00:23:15.188 20:53:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1738523 ']' 00:23:15.188 20:53:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1738523 00:23:15.188 20:53:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:15.188 20:53:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:15.188 20:53:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1738523 00:23:15.188 20:53:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # 
process_name=reactor_2 00:23:15.188 20:53:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:15.188 20:53:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1738523' 00:23:15.188 killing process with pid 1738523 00:23:15.188 20:53:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1738523 00:23:15.188 20:53:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1738523 00:23:15.188 { 00:23:15.188 "results": [ 00:23:15.188 { 00:23:15.188 "job": "Nvme0n1", 00:23:15.188 "core_mask": "0x4", 00:23:15.188 "workload": "verify", 00:23:15.188 "status": "terminated", 00:23:15.188 "verify_range": { 00:23:15.188 "start": 0, 00:23:15.188 "length": 16384 00:23:15.188 }, 00:23:15.188 "queue_depth": 128, 00:23:15.188 "io_size": 4096, 00:23:15.188 "runtime": 34.24479, 00:23:15.188 "iops": 7998.705788530167, 00:23:15.188 "mibps": 31.244944486445966, 00:23:15.188 "io_failed": 0, 00:23:15.188 "io_timeout": 0, 00:23:15.188 "avg_latency_us": 15976.550144708839, 00:23:15.188 "min_latency_us": 274.5837037037037, 00:23:15.188 "max_latency_us": 4026531.84 00:23:15.188 } 00:23:15.188 ], 00:23:15.188 "core_count": 1 00:23:15.188 } 00:23:15.450 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1738523 00:23:15.450 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:15.450 [2024-11-26 20:52:42.886035] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:23:15.450 [2024-11-26 20:52:42.886119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1738523 ] 00:23:15.450 [2024-11-26 20:52:42.958437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.450 [2024-11-26 20:52:43.021635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:15.450 Running I/O for 90 seconds... 
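The port_status checks traced above poll bdevperf over its RPC socket and filter the resulting JSON with jq. Below is a minimal standalone sketch of that pattern, assembled only from the rpc.py invocation and jq filters visible in this log; the socket path, ports, and the helper name port_status mirror what host/multipath_status.sh prints in the trace and are placeholders outside this test environment.

    #!/usr/bin/env bash
    # Sketch of the check pattern seen in the trace: query bdevperf's io_paths
    # and compare one field of the listener on a given port against an expected value.
    # Arguments: <trsvcid/port> <field: current|connected|accessible> <expected: true|false>
    port_status() {
        local port=$1 field=$2 expected=$3
        local value
        value=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ "$value" == "$expected" ]]
    }

    # Example from the scenario above: after set_ANA_state non_optimized inaccessible,
    # the 4421 path is expected to report accessible == false.
    port_status 4421 accessible false && echo "4421 accessible=false as expected"

This is illustrative only; the actual test wraps the same call in check_status, which asserts current/connected/accessible for both listeners (4420 and 4421) after each ANA state change, as the surrounding log shows.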
00:23:15.450 8528.00 IOPS, 33.31 MiB/s [2024-11-26T19:53:19.147Z] 8509.50 IOPS, 33.24 MiB/s [2024-11-26T19:53:19.147Z] 8515.00 IOPS, 33.26 MiB/s [2024-11-26T19:53:19.147Z] 8540.25 IOPS, 33.36 MiB/s [2024-11-26T19:53:19.147Z] 8518.20 IOPS, 33.27 MiB/s [2024-11-26T19:53:19.147Z] 8457.33 IOPS, 33.04 MiB/s [2024-11-26T19:53:19.147Z] 8499.29 IOPS, 33.20 MiB/s [2024-11-26T19:53:19.147Z] 8499.62 IOPS, 33.20 MiB/s [2024-11-26T19:53:19.147Z] 8526.78 IOPS, 33.31 MiB/s [2024-11-26T19:53:19.147Z] 8516.30 IOPS, 33.27 MiB/s [2024-11-26T19:53:19.147Z] 8508.91 IOPS, 33.24 MiB/s [2024-11-26T19:53:19.147Z] 8516.25 IOPS, 33.27 MiB/s [2024-11-26T19:53:19.147Z] 8505.15 IOPS, 33.22 MiB/s [2024-11-26T19:53:19.147Z] 8510.14 IOPS, 33.24 MiB/s [2024-11-26T19:53:19.147Z] 8506.33 IOPS, 33.23 MiB/s [2024-11-26T19:53:19.147Z] [2024-11-26 20:52:59.549977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:105352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.450 [2024-11-26 20:52:59.550028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:15.450 [2024-11-26 20:52:59.550109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:105864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.450 [2024-11-26 20:52:59.550131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:15.450 [2024-11-26 20:52:59.550170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:105872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.450 [2024-11-26 20:52:59.550188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:15.450 [2024-11-26 20:52:59.550210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.450 [2024-11-26 20:52:59.550242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:15.450 [2024-11-26 20:52:59.550266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:105888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.450 [2024-11-26 20:52:59.550282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:15.450 [2024-11-26 20:52:59.550313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:105896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.450 [2024-11-26 20:52:59.550332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:15.450 [2024-11-26 20:52:59.550355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:105904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.450 [2024-11-26 20:52:59.550372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:15.450 [2024-11-26 20:52:59.550395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:105912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.450 [2024-11-26 20:52:59.550412] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:15.450 [2024-11-26 20:52:59.550724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:105920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.450 [2024-11-26 20:52:59.550747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:15.450 [2024-11-26 20:52:59.550788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.450 [2024-11-26 20:52:59.550807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:15.450 [2024-11-26 20:52:59.550830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:105936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.450 [2024-11-26 20:52:59.550847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:15.450 [2024-11-26 20:52:59.550870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.450 [2024-11-26 20:52:59.550886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:15.450 [2024-11-26 20:52:59.550908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:105952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.450 [2024-11-26 20:52:59.550924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:15.450 [2024-11-26 20:52:59.550947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:105960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.450 [2024-11-26 20:52:59.550963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:15.450 [2024-11-26 20:52:59.550986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:105968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.450 [2024-11-26 20:52:59.551003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:15.450 [2024-11-26 20:52:59.551025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:105976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.450 [2024-11-26 20:52:59.551058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:15.450 [2024-11-26 20:52:59.551081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:105984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.450 [2024-11-26 20:52:59.551097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:15.450 [2024-11-26 20:52:59.551135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:105992 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:15.450 [2024-11-26 20:52:59.551151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:15.450 [2024-11-26 20:52:59.551172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:106000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.450 [2024-11-26 20:52:59.551203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:15.450 [2024-11-26 20:52:59.551227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:106008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.450 [2024-11-26 20:52:59.551244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:15.450 [2024-11-26 20:52:59.551266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:106016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.450 [2024-11-26 20:52:59.551282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:15.450 [2024-11-26 20:52:59.551315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:106024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.450 [2024-11-26 20:52:59.551339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:15.450 [2024-11-26 20:52:59.551363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:106032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.450 [2024-11-26 20:52:59.551380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.551403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:106040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.451 [2024-11-26 20:52:59.551418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.551441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:106048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.451 [2024-11-26 20:52:59.551457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.551480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.451 [2024-11-26 20:52:59.551496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.551518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:106064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.451 [2024-11-26 20:52:59.551533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.551556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:27 nsid:1 lba:106072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.451 [2024-11-26 20:52:59.551572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.551610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:106080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.451 [2024-11-26 20:52:59.551626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.551662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:106088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.451 [2024-11-26 20:52:59.551677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.551698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:106096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.451 [2024-11-26 20:52:59.551713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.551734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:106104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.451 [2024-11-26 20:52:59.551749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.551770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:106112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.451 [2024-11-26 20:52:59.551785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.551806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:106120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.451 [2024-11-26 20:52:59.551825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.551847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:106128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.451 [2024-11-26 20:52:59.551863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.551884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:106136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.451 [2024-11-26 20:52:59.551899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.551921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:106144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.451 [2024-11-26 20:52:59.551936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.551957] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:106152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.451 [2024-11-26 20:52:59.551972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.551993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:106160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.451 [2024-11-26 20:52:59.552008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.552029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:106168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.451 [2024-11-26 20:52:59.552044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.552160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:106176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.451 [2024-11-26 20:52:59.552181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.552209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:106184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.451 [2024-11-26 20:52:59.552227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.552251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:106192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.451 [2024-11-26 20:52:59.552267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.552315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:106200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.451 [2024-11-26 20:52:59.552335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.552360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:106208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.451 [2024-11-26 20:52:59.552377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.552404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:106216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.451 [2024-11-26 20:52:59.552421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.552451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:106224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.451 [2024-11-26 20:52:59.552468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 
sqhd:0027 p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.552493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:106232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.451 [2024-11-26 20:52:59.552510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.552535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:106240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.451 [2024-11-26 20:52:59.552551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.552575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:106248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.451 [2024-11-26 20:52:59.552591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.552631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:106256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.451 [2024-11-26 20:52:59.552647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.552686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:106264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.451 [2024-11-26 20:52:59.552702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.552725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:106272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.451 [2024-11-26 20:52:59.552741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.552765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:106280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.451 [2024-11-26 20:52:59.552780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.552803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:106288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.451 [2024-11-26 20:52:59.552818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.552842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:106296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.451 [2024-11-26 20:52:59.552858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.553032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:106304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.451 [2024-11-26 20:52:59.553054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.553084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:106312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.451 [2024-11-26 20:52:59.553101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.553133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:105360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.451 [2024-11-26 20:52:59.553151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.553177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:105368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.451 [2024-11-26 20:52:59.553193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.553219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:105376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.451 [2024-11-26 20:52:59.553235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:15.451 [2024-11-26 20:52:59.553261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:105384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.451 [2024-11-26 20:52:59.553278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.553313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:105392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.452 [2024-11-26 20:52:59.553332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.553359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:105400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.452 [2024-11-26 20:52:59.553375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.553402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:105408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.452 [2024-11-26 20:52:59.553418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.553443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:106320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.452 [2024-11-26 20:52:59.553459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.553485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:106328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.452 [2024-11-26 
20:52:59.553501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.553527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:106336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.452 [2024-11-26 20:52:59.553544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.553569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:106344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.452 [2024-11-26 20:52:59.553586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.553612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:106352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.452 [2024-11-26 20:52:59.553629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.553675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:106360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.452 [2024-11-26 20:52:59.553692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.553717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:105416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.452 [2024-11-26 20:52:59.553733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.553758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:105424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.452 [2024-11-26 20:52:59.553774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.553798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:105432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.452 [2024-11-26 20:52:59.553814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.553838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:105440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.452 [2024-11-26 20:52:59.553854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.553879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:105448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.452 [2024-11-26 20:52:59.553895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.553919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105456 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.452 [2024-11-26 20:52:59.553935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.553960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:105464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.452 [2024-11-26 20:52:59.553976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.554001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:105472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.452 [2024-11-26 20:52:59.554016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.554041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:105480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.452 [2024-11-26 20:52:59.554056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.554081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:105488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.452 [2024-11-26 20:52:59.554097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.554121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:105496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.452 [2024-11-26 20:52:59.554136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.554161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:105504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.452 [2024-11-26 20:52:59.554181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.554207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:105512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.452 [2024-11-26 20:52:59.554223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.554248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:105520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.452 [2024-11-26 20:52:59.554263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.554315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:105528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.452 [2024-11-26 20:52:59.554335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.554362] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:105536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.452 [2024-11-26 20:52:59.554379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.554405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:105544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.452 [2024-11-26 20:52:59.554422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.554448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:105552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.452 [2024-11-26 20:52:59.554464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.554491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:105560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.452 [2024-11-26 20:52:59.554507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.554648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.452 [2024-11-26 20:52:59.554686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.554719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:105576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.452 [2024-11-26 20:52:59.554737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.554767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:105584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.452 [2024-11-26 20:52:59.554784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.554813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:105592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.452 [2024-11-26 20:52:59.554831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.554859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:105600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.452 [2024-11-26 20:52:59.554884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.554915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:105608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.452 [2024-11-26 20:52:59.554933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 
sqhd:0058 p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.554976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:105616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.452 [2024-11-26 20:52:59.554992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.555022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:105624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.452 [2024-11-26 20:52:59.555039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.555066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:105632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.452 [2024-11-26 20:52:59.555083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.555110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:105640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.452 [2024-11-26 20:52:59.555126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:15.452 [2024-11-26 20:52:59.555154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:105648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.452 [2024-11-26 20:52:59.555170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:15.453 [2024-11-26 20:52:59.555197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.453 [2024-11-26 20:52:59.555213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:15.453 [2024-11-26 20:52:59.555241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:105664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.453 [2024-11-26 20:52:59.555257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:15.453 [2024-11-26 20:52:59.555299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:105672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.453 [2024-11-26 20:52:59.555326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:15.453 [2024-11-26 20:52:59.555356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:105680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.453 [2024-11-26 20:52:59.555373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:15.453 [2024-11-26 20:52:59.555401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:105688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.453 [2024-11-26 20:52:59.555418] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:15.453 [2024-11-26 20:52:59.555446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.453 [2024-11-26 20:52:59.555467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:15.453 [2024-11-26 20:52:59.555496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:105704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.453 [2024-11-26 20:52:59.555513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:15.453 [2024-11-26 20:52:59.555541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:105712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.453 [2024-11-26 20:52:59.555557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:15.453 [2024-11-26 20:52:59.555585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:105720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.453 [2024-11-26 20:52:59.555616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:15.453 [2024-11-26 20:52:59.555644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:105728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.453 [2024-11-26 20:52:59.555660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:15.453 [2024-11-26 20:52:59.555686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:105736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.453 [2024-11-26 20:52:59.555702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:15.453 [2024-11-26 20:52:59.555728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.453 [2024-11-26 20:52:59.555744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:15.453 [2024-11-26 20:52:59.555771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:105752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.453 [2024-11-26 20:52:59.555787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:15.453 [2024-11-26 20:52:59.555813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:105760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.453 [2024-11-26 20:52:59.555828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:15.453 [2024-11-26 20:52:59.555855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:105768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.453 
[2024-11-26 20:52:59.555870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:15.453 [2024-11-26 20:52:59.555897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:105776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.453 [2024-11-26 20:52:59.555912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:15.453 [2024-11-26 20:52:59.555939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:105784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.453 [2024-11-26 20:52:59.555954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:15.453 [2024-11-26 20:52:59.555981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:105792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.453 [2024-11-26 20:52:59.555997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:15.453 [2024-11-26 20:52:59.556029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:105800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.453 [2024-11-26 20:52:59.556045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:15.453 [2024-11-26 20:52:59.556072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:105808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.453 [2024-11-26 20:52:59.556087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:15.453 [2024-11-26 20:52:59.556114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:105816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.453 [2024-11-26 20:52:59.556129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:15.453 [2024-11-26 20:52:59.556156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:105824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.453 [2024-11-26 20:52:59.556172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:15.453 [2024-11-26 20:52:59.556198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:105832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.453 [2024-11-26 20:52:59.556214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:15.453 [2024-11-26 20:52:59.556241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:105840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.453 [2024-11-26 20:52:59.556256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:15.453 [2024-11-26 20:52:59.556298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 
lba:105848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.453 [2024-11-26 20:52:59.556323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:15.453 [2024-11-26 20:52:59.556353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:105856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.453 [2024-11-26 20:52:59.556369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:15.453 [2024-11-26 20:52:59.556397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:106368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.453 [2024-11-26 20:52:59.556414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:15.453 7991.06 IOPS, 31.22 MiB/s [2024-11-26T19:53:19.150Z] 7521.00 IOPS, 29.38 MiB/s [2024-11-26T19:53:19.150Z] 7103.17 IOPS, 27.75 MiB/s [2024-11-26T19:53:19.150Z] 6729.32 IOPS, 26.29 MiB/s [2024-11-26T19:53:19.150Z] 6806.35 IOPS, 26.59 MiB/s [2024-11-26T19:53:19.150Z] 6889.19 IOPS, 26.91 MiB/s [2024-11-26T19:53:19.150Z] 6996.86 IOPS, 27.33 MiB/s [2024-11-26T19:53:19.150Z] 7176.70 IOPS, 28.03 MiB/s [2024-11-26T19:53:19.150Z] 7347.33 IOPS, 28.70 MiB/s [2024-11-26T19:53:19.150Z] 7480.64 IOPS, 29.22 MiB/s [2024-11-26T19:53:19.150Z] 7514.88 IOPS, 29.36 MiB/s [2024-11-26T19:53:19.150Z] 7553.59 IOPS, 29.51 MiB/s [2024-11-26T19:53:19.150Z] 7587.89 IOPS, 29.64 MiB/s [2024-11-26T19:53:19.150Z] 7679.72 IOPS, 30.00 MiB/s [2024-11-26T19:53:19.150Z] 7789.37 IOPS, 30.43 MiB/s [2024-11-26T19:53:19.150Z] 7893.65 IOPS, 30.83 MiB/s [2024-11-26T19:53:19.150Z] [2024-11-26 20:53:16.078158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:38264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.453 [2024-11-26 20:53:16.078223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:15.453 [2024-11-26 20:53:16.078282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:38280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.453 [2024-11-26 20:53:16.078324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:15.453 [2024-11-26 20:53:16.078351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:38296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.453 [2024-11-26 20:53:16.078369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:15.453 [2024-11-26 20:53:16.078392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.453 [2024-11-26 20:53:16.078408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:15.453 [2024-11-26 20:53:16.078431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:37672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.453 [2024-11-26 20:53:16.078447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:93 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:15.453 [2024-11-26 20:53:16.078470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:37704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.453 [2024-11-26 20:53:16.078486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:15.453 [2024-11-26 20:53:16.078508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.453 [2024-11-26 20:53:16.078525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:15.453 [2024-11-26 20:53:16.078546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:37768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.453 [2024-11-26 20:53:16.078563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:15.453 [2024-11-26 20:53:16.078585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:37808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.454 [2024-11-26 20:53:16.078601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:15.454 [2024-11-26 20:53:16.078623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:37840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.454 [2024-11-26 20:53:16.078639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:15.454 [2024-11-26 20:53:16.078661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:37872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.454 [2024-11-26 20:53:16.078678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:15.454 [2024-11-26 20:53:16.078700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:37904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.454 [2024-11-26 20:53:16.078717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:15.454 [2024-11-26 20:53:16.078739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:37936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.454 [2024-11-26 20:53:16.078755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:15.454 [2024-11-26 20:53:16.078777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.454 [2024-11-26 20:53:16.078798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:15.454 [2024-11-26 20:53:16.078837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:37648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.454 [2024-11-26 20:53:16.078854] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:15.454 [2024-11-26 20:53:16.078876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:37680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.454 [2024-11-26 20:53:16.078906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:15.454 [2024-11-26 20:53:16.078928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:37712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.454 [2024-11-26 20:53:16.078944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:15.454 [2024-11-26 20:53:16.078964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:37744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.454 [2024-11-26 20:53:16.078979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:15.454 [2024-11-26 20:53:16.079000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:37776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.454 [2024-11-26 20:53:16.079015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:15.454 [2024-11-26 20:53:16.079036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:37800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.454 [2024-11-26 20:53:16.079051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:15.454 [2024-11-26 20:53:16.079072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:37832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.454 [2024-11-26 20:53:16.079087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:15.454 [2024-11-26 20:53:16.079108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:37864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.454 [2024-11-26 20:53:16.079123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:15.454 [2024-11-26 20:53:16.079144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:37896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.454 [2024-11-26 20:53:16.079159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:15.454 [2024-11-26 20:53:16.079180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:37928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.454 [2024-11-26 20:53:16.079196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:15.454 [2024-11-26 20:53:16.079217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:37960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:15.454 [2024-11-26 20:53:16.079232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:15.454 [2024-11-26 20:53:16.079253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:37992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.454 [2024-11-26 20:53:16.079268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:15.454 [2024-11-26 20:53:16.079293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:38320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.454 [2024-11-26 20:53:16.079333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:15.454 [2024-11-26 20:53:16.079357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:38336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.454 [2024-11-26 20:53:16.079373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:15.454 [2024-11-26 20:53:16.079396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:38024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.454 [2024-11-26 20:53:16.079412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:15.454 [2024-11-26 20:53:16.080185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:38016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.454 [2024-11-26 20:53:16.080224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:15.454 [2024-11-26 20:53:16.080251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.454 [2024-11-26 20:53:16.080268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:15.454 [2024-11-26 20:53:16.080313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:38088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.454 [2024-11-26 20:53:16.080332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:15.454 [2024-11-26 20:53:16.080370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:38120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.454 [2024-11-26 20:53:16.080387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:15.454 [2024-11-26 20:53:16.080410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:38152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.454 [2024-11-26 20:53:16.080426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:15.454 [2024-11-26 20:53:16.080449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 
nsid:1 lba:38184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.454 [2024-11-26 20:53:16.080465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:15.454 [2024-11-26 20:53:16.080487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:38216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.454 [2024-11-26 20:53:16.080502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:15.454 [2024-11-26 20:53:16.080525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:38240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.454 [2024-11-26 20:53:16.080541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:15.454 [2024-11-26 20:53:16.080564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:38032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.454 [2024-11-26 20:53:16.080581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:15.454 [2024-11-26 20:53:16.080608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:38064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.454 [2024-11-26 20:53:16.080626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:15.454 [2024-11-26 20:53:16.080668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:38352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.454 [2024-11-26 20:53:16.080684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:15.454 [2024-11-26 20:53:16.080720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:38368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.454 [2024-11-26 20:53:16.080735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:15.455 [2024-11-26 20:53:16.080756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:38384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.455 [2024-11-26 20:53:16.080772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:15.455 [2024-11-26 20:53:16.080792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.455 [2024-11-26 20:53:16.080807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:15.455 [2024-11-26 20:53:16.080827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:38416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.455 [2024-11-26 20:53:16.080843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:15.455 [2024-11-26 20:53:16.080863] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:38432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:15.455 [2024-11-26 20:53:16.080878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:15.455 [2024-11-26 20:53:16.080898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:38080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.455 [2024-11-26 20:53:16.080913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:15.455 [2024-11-26 20:53:16.080933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:38112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.455 [2024-11-26 20:53:16.080949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:15.455 [2024-11-26 20:53:16.080969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.455 [2024-11-26 20:53:16.080984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:15.455 [2024-11-26 20:53:16.081004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:38176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.455 [2024-11-26 20:53:16.081019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:15.455 [2024-11-26 20:53:16.081040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.455 [2024-11-26 20:53:16.081055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:15.455 [2024-11-26 20:53:16.081076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:38248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:15.455 [2024-11-26 20:53:16.081095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:15.455 7960.97 IOPS, 31.10 MiB/s [2024-11-26T19:53:19.152Z] 7980.76 IOPS, 31.17 MiB/s [2024-11-26T19:53:19.152Z] 7999.41 IOPS, 31.25 MiB/s [2024-11-26T19:53:19.152Z] Received shutdown signal, test time was about 34.245597 seconds 00:23:15.455 00:23:15.455 Latency(us) 00:23:15.455 [2024-11-26T19:53:19.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.455 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:15.455 Verification LBA range: start 0x0 length 0x4000 00:23:15.455 Nvme0n1 : 34.24 7998.71 31.24 0.00 0.00 15976.55 274.58 4026531.84 00:23:15.455 [2024-11-26T19:53:19.152Z] =================================================================================================================== 00:23:15.455 [2024-11-26T19:53:19.152Z] Total : 7998.71 31.24 0.00 0.00 15976.55 274.58 4026531.84 00:23:15.455 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:23:15.711 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:23:15.711 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:15.711 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:23:15.711 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:15.711 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:23:15.711 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:15.711 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:23:15.711 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:15.711 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:15.711 rmmod nvme_tcp 00:23:15.711 rmmod nvme_fabrics 00:23:15.711 rmmod nvme_keyring 00:23:15.711 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:15.711 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:23:15.711 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:23:15.711 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1738350 ']' 00:23:15.711 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1738350 00:23:15.711 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1738350 ']' 00:23:15.711 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1738350 00:23:15.711 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:15.711 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:15.711 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1738350 00:23:15.711 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:15.711 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:15.711 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1738350' 00:23:15.711 killing process with pid 1738350 00:23:15.711 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1738350 00:23:15.711 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1738350 00:23:15.969 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:15.969 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:15.969 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:15.969 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:23:15.969 20:53:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:23:15.969 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:23:15.969 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:15.969 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:15.969 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:15.969 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.969 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:15.969 20:53:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:18.542 00:23:18.542 real 0m43.158s 00:23:18.542 user 2m11.976s 00:23:18.542 sys 0m10.395s 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:18.542 ************************************ 00:23:18.542 END TEST nvmf_host_multipath_status 00:23:18.542 ************************************ 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.542 ************************************ 00:23:18.542 START TEST nvmf_discovery_remove_ifc 00:23:18.542 ************************************ 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:18.542 * Looking for test storage... 
00:23:18.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:18.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.542 --rc genhtml_branch_coverage=1 00:23:18.542 --rc genhtml_function_coverage=1 00:23:18.542 --rc genhtml_legend=1 00:23:18.542 --rc geninfo_all_blocks=1 00:23:18.542 --rc geninfo_unexecuted_blocks=1 00:23:18.542 00:23:18.542 ' 00:23:18.542 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:18.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.543 --rc genhtml_branch_coverage=1 00:23:18.543 --rc genhtml_function_coverage=1 00:23:18.543 --rc genhtml_legend=1 00:23:18.543 --rc geninfo_all_blocks=1 00:23:18.543 --rc geninfo_unexecuted_blocks=1 00:23:18.543 00:23:18.543 ' 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:18.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.543 --rc genhtml_branch_coverage=1 00:23:18.543 --rc genhtml_function_coverage=1 00:23:18.543 --rc genhtml_legend=1 00:23:18.543 --rc geninfo_all_blocks=1 00:23:18.543 --rc geninfo_unexecuted_blocks=1 00:23:18.543 00:23:18.543 ' 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:18.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.543 --rc genhtml_branch_coverage=1 00:23:18.543 --rc genhtml_function_coverage=1 00:23:18.543 --rc genhtml_legend=1 00:23:18.543 --rc geninfo_all_blocks=1 00:23:18.543 --rc geninfo_unexecuted_blocks=1 00:23:18.543 00:23:18.543 ' 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:18.543 
20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:18.543 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:23:18.543 20:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:20.447 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:20.447 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:23:20.447 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:20.447 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:20.447 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:23:20.448 20:53:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:20.448 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:20.448 20:53:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:20.448 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:20.448 Found net devices under 0000:09:00.0: cvl_0_0 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:20.448 Found net devices under 0000:09:00.1: cvl_0_1 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:20.448 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:20.706 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:20.706 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:20.706 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:20.706 
20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:20.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:20.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:23:20.706 00:23:20.706 --- 10.0.0.2 ping statistics --- 00:23:20.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.706 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:23:20.706 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:20.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:20.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:23:20.706 00:23:20.706 --- 10.0.0.1 ping statistics --- 00:23:20.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.706 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:23:20.706 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:20.706 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:23:20.706 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:20.706 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:20.706 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:20.706 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:20.706 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:20.706 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:20.706 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:20.706 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:20.706 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:20.706 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:20.706 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:20.706 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1744988 00:23:20.706 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 1744988 00:23:20.706 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1744988 ']' 00:23:20.706 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.706 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:20.706 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:20.706 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:20.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.706 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:20.706 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:20.706 [2024-11-26 20:53:24.253272] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:23:20.706 [2024-11-26 20:53:24.253396] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:20.706 [2024-11-26 20:53:24.326227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.706 [2024-11-26 20:53:24.384293] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:20.706 [2024-11-26 20:53:24.384367] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:20.706 [2024-11-26 20:53:24.384396] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:20.706 [2024-11-26 20:53:24.384408] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:20.706 [2024-11-26 20:53:24.384418] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:20.706 [2024-11-26 20:53:24.385022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.964 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:20.964 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:23:20.964 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:20.964 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:20.964 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:20.964 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:20.964 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:20.964 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.964 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:20.964 [2024-11-26 20:53:24.545790] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:20.964 [2024-11-26 20:53:24.553973] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:20.964 null0 00:23:20.964 [2024-11-26 20:53:24.585945] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:20.964 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.964 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1745037 00:23:20.964 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1745037 /tmp/host.sock 00:23:20.964 20:53:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:20.964 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1745037 ']' 00:23:20.964 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:20.964 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:20.964 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:20.964 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:20.964 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:20.964 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:20.964 [2024-11-26 20:53:24.658742] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:23:20.964 [2024-11-26 20:53:24.658845] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1745037 ] 00:23:21.221 [2024-11-26 20:53:24.731023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.221 [2024-11-26 20:53:24.794511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.221 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:21.221 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:23:21.221 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:21.221 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:21.221 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.221 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:21.221 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.221 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:21.221 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.221 20:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:21.479 20:53:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.479 20:53:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:21.479 20:53:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.479 20:53:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:22.411 [2024-11-26 20:53:26.084457] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:22.411 [2024-11-26 20:53:26.084489] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:22.411 [2024-11-26 20:53:26.084518] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:22.669 [2024-11-26 20:53:26.170816] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:22.669 [2024-11-26 20:53:26.232502] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:23:22.669 [2024-11-26 20:53:26.233521] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x15d1fd0:1 started. 00:23:22.669 [2024-11-26 20:53:26.235239] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:22.669 [2024-11-26 20:53:26.235321] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:22.669 [2024-11-26 20:53:26.235367] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:22.669 [2024-11-26 20:53:26.235390] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:22.669 [2024-11-26 20:53:26.235423] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:22.669 20:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.669 20:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:22.669 20:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:22.669 20:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:22.669 20:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:22.669 20:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.669 20:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:22.669 20:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:22.669 20:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:22.669 [2024-11-26 20:53:26.241937] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x15d1fd0 was disconnected and freed. delete nvme_qpair. 
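Note: the bdev checks in this part of the trace come from the test's get_bdev_list/wait_for_bdev helpers. A minimal sketch of that polling loop, reconstructed from the commands visible above (rpc_cmd is the test suite's wrapper around SPDK's rpc.py, and /tmp/host.sock is the -r socket passed to the host-side nvmf_tgt earlier in this trace):

get_bdev_list() {
    # Query the host-side bdev_nvme app over its RPC socket and flatten the
    # bdev names into one sorted, space-separated string.
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

# Poll until the discovery-attached namespace shows up as nvme0n1.
while [[ "$(get_bdev_list)" != nvme0n1 ]]; do
    sleep 1
done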
00:23:22.669 20:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.669 20:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:22.669 20:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:23:22.669 20:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:23:22.669 20:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:22.669 20:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:22.669 20:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:22.669 20:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:22.669 20:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.669 20:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:22.669 20:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:22.669 20:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:22.669 20:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.927 20:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:22.927 20:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:23.860 20:53:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:23.860 20:53:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:23.860 20:53:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.860 20:53:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:23.860 20:53:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:23.860 20:53:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:23.860 20:53:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:23.860 20:53:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.860 20:53:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:23.860 20:53:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:24.791 20:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:24.791 20:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:24.791 20:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:24.791 20:53:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.791 20:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:24.791 20:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:24.791 20:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:24.791 20:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.791 20:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:24.791 20:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:26.159 20:53:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:26.159 20:53:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:26.159 20:53:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:26.159 20:53:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.159 20:53:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:26.159 20:53:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:26.159 20:53:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:26.159 20:53:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.159 20:53:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:26.159 20:53:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:27.090 20:53:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:27.090 20:53:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:27.090 20:53:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.090 20:53:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:27.090 20:53:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:27.090 20:53:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:27.090 20:53:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:27.090 20:53:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.090 20:53:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:27.090 20:53:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:28.024 20:53:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:28.024 20:53:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.024 20:53:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:28.024 20:53:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.024 20:53:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:28.024 20:53:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:28.024 20:53:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:28.024 20:53:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.024 20:53:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:28.024 20:53:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:28.024 [2024-11-26 20:53:31.676515] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:28.024 [2024-11-26 20:53:31.676579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.024 [2024-11-26 20:53:31.676614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.024 [2024-11-26 20:53:31.676633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.024 [2024-11-26 20:53:31.676646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.024 [2024-11-26 20:53:31.676660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.024 [2024-11-26 20:53:31.676672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.024 [2024-11-26 20:53:31.676699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.024 [2024-11-26 20:53:31.676711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.024 [2024-11-26 20:53:31.676724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:28.024 [2024-11-26 20:53:31.676736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.024 [2024-11-26 20:53:31.676748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ae860 is same with the state(6) to be set 00:23:28.024 [2024-11-26 20:53:31.686533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ae860 (9): Bad file descriptor 00:23:28.024 [2024-11-26 20:53:31.696574] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:28.024 [2024-11-26 20:53:31.696612] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:23:28.024 [2024-11-26 20:53:31.696622] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:28.024 [2024-11-26 20:53:31.696630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:28.024 [2024-11-26 20:53:31.696687] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:28.958 20:53:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:28.958 20:53:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.958 20:53:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:28.958 20:53:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.958 20:53:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:28.958 20:53:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:28.958 20:53:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:29.216 [2024-11-26 20:53:32.758376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:29.216 [2024-11-26 20:53:32.758458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ae860 with addr=10.0.0.2, port=4420 00:23:29.216 [2024-11-26 20:53:32.758494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ae860 is same with the state(6) to be set 00:23:29.216 [2024-11-26 20:53:32.758545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ae860 (9): Bad file descriptor 00:23:29.216 [2024-11-26 20:53:32.759032] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:23:29.216 [2024-11-26 20:53:32.759077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:29.216 [2024-11-26 20:53:32.759094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:29.216 [2024-11-26 20:53:32.759109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:29.216 [2024-11-26 20:53:32.759123] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:29.216 [2024-11-26 20:53:32.759133] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:29.216 [2024-11-26 20:53:32.759140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:29.216 [2024-11-26 20:53:32.759154] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
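Note: the connect/reset churn above is the point of the test: the target-side interface was pulled out from under an active controller. The failure-injection commands appear earlier in the trace and are repeated here for reference; the retry behaviour is governed by the bdev_nvme_start_discovery flags used above (--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1):

# Drop the listener address and take the link down inside the target's
# network namespace; the host-side controller then fails with errno 110
# (connection timed out) and bdev_nvme keeps reconnecting until the
# ctrlr-loss timeout expires, after which nvme0n1 is deleted.
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down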
00:23:29.216 [2024-11-26 20:53:32.759162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:29.216 20:53:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.216 20:53:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:29.216 20:53:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:30.149 [2024-11-26 20:53:33.761664] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:30.149 [2024-11-26 20:53:33.761694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:30.149 [2024-11-26 20:53:33.761713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:30.149 [2024-11-26 20:53:33.761740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:30.149 [2024-11-26 20:53:33.761752] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:23:30.149 [2024-11-26 20:53:33.761765] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:30.149 [2024-11-26 20:53:33.761774] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:30.149 [2024-11-26 20:53:33.761781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:30.149 [2024-11-26 20:53:33.761819] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:30.149 [2024-11-26 20:53:33.761870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.149 [2024-11-26 20:53:33.761891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.149 [2024-11-26 20:53:33.761909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.149 [2024-11-26 20:53:33.761921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.149 [2024-11-26 20:53:33.761934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.149 [2024-11-26 20:53:33.761946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.149 [2024-11-26 20:53:33.761965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.149 [2024-11-26 20:53:33.761978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.149 [2024-11-26 20:53:33.761992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.149 [2024-11-26 20:53:33.762004] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.149 [2024-11-26 20:53:33.762016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:23:30.149 [2024-11-26 20:53:33.762069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x159db50 (9): Bad file descriptor 00:23:30.149 [2024-11-26 20:53:33.763055] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:30.149 [2024-11-26 20:53:33.763077] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:23:30.149 20:53:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:30.149 20:53:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:30.149 20:53:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:30.149 20:53:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.149 20:53:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:30.149 20:53:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:30.149 20:53:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:30.149 20:53:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.149 20:53:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:30.149 20:53:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:30.149 20:53:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:30.408 20:53:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:30.408 20:53:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:30.408 20:53:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:30.408 20:53:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.408 20:53:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:30.408 20:53:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:30.408 20:53:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:30.408 20:53:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:30.408 20:53:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.408 20:53:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:30.408 20:53:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:31.343 20:53:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:31.343 20:53:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:31.343 20:53:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:31.343 20:53:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.343 20:53:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:31.343 20:53:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:31.343 20:53:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:31.343 20:53:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.343 20:53:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:31.343 20:53:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:32.277 [2024-11-26 20:53:35.819991] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:32.277 [2024-11-26 20:53:35.820019] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:32.277 [2024-11-26 20:53:35.820040] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:32.277 20:53:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:32.277 20:53:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:32.277 20:53:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:32.277 20:53:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.277 20:53:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:32.277 20:53:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:32.277 20:53:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:32.277 20:53:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.277 [2024-11-26 20:53:35.946468] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:32.535 20:53:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:32.535 20:53:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:32.535 [2024-11-26 20:53:36.049442] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:23:32.535 [2024-11-26 20:53:36.050348] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1587b40:1 started. 
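Note: the reattach in progress here follows the interface being restored a few seconds earlier in the trace. The recovery step, as executed above:

# Re-add the address and bring the link back up in the target namespace;
# the discovery controller reconnects to port 8009, re-reads the discovery
# log page, and attaches a fresh data controller, so the namespace comes
# back as a new bdev (nvme1n1) and wait_for_bdev nvme1n1 completes.
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up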
00:23:32.535 [2024-11-26 20:53:36.051678] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:32.535 [2024-11-26 20:53:36.051723] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:32.535 [2024-11-26 20:53:36.051758] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:32.535 [2024-11-26 20:53:36.051780] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:32.535 [2024-11-26 20:53:36.051793] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:32.535 [2024-11-26 20:53:36.097945] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1587b40 was disconnected and freed. delete nvme_qpair. 00:23:33.468 20:53:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:33.468 20:53:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:33.468 20:53:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:33.468 20:53:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.468 20:53:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:33.468 20:53:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:33.468 20:53:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:33.468 20:53:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.468 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:33.468 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:33.468 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1745037 00:23:33.468 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1745037 ']' 00:23:33.468 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1745037 00:23:33.468 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:23:33.468 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:33.468 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1745037 00:23:33.468 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:33.468 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:33.468 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1745037' 00:23:33.468 killing process with pid 1745037 00:23:33.468 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1745037 00:23:33.468 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1745037 00:23:33.725 20:53:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:33.726 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:33.726 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:23:33.726 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:33.726 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:23:33.726 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:33.726 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:33.726 rmmod nvme_tcp 00:23:33.726 rmmod nvme_fabrics 00:23:33.726 rmmod nvme_keyring 00:23:33.726 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:33.726 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:23:33.726 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:23:33.726 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1744988 ']' 00:23:33.726 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1744988 00:23:33.726 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1744988 ']' 00:23:33.726 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1744988 00:23:33.726 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:23:33.726 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:33.726 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1744988 00:23:33.726 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:33.726 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:33.726 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1744988' 00:23:33.726 killing process with pid 1744988 00:23:33.726 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1744988 00:23:33.726 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1744988 00:23:33.986 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:33.986 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:33.986 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:33.986 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:23:33.986 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:23:33.986 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:33.986 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:23:33.986 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:33.986 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:33.986 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.986 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.986 20:53:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.522 20:53:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:36.522 00:23:36.522 real 0m17.944s 00:23:36.522 user 0m25.821s 00:23:36.522 sys 0m3.159s 00:23:36.522 20:53:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:36.522 20:53:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:36.522 ************************************ 00:23:36.522 END TEST nvmf_discovery_remove_ifc 00:23:36.522 ************************************ 00:23:36.522 20:53:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:36.522 20:53:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:36.522 20:53:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:36.522 20:53:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.522 ************************************ 00:23:36.522 START TEST nvmf_identify_kernel_target 00:23:36.522 ************************************ 00:23:36.522 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:36.522 * Looking for test storage... 
00:23:36.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:36.522 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:36.522 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:23:36.522 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:36.522 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:36.522 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:36.522 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:36.522 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:36.522 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:23:36.522 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:23:36.522 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:23:36.522 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:23:36.522 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:23:36.522 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:23:36.522 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:23:36.522 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:36.522 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:23:36.522 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:23:36.522 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:36.522 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:36.522 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:23:36.522 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:23:36.522 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:36.522 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:36.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.523 --rc genhtml_branch_coverage=1 00:23:36.523 --rc genhtml_function_coverage=1 00:23:36.523 --rc genhtml_legend=1 00:23:36.523 --rc geninfo_all_blocks=1 00:23:36.523 --rc geninfo_unexecuted_blocks=1 00:23:36.523 00:23:36.523 ' 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:36.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.523 --rc genhtml_branch_coverage=1 00:23:36.523 --rc genhtml_function_coverage=1 00:23:36.523 --rc genhtml_legend=1 00:23:36.523 --rc geninfo_all_blocks=1 00:23:36.523 --rc geninfo_unexecuted_blocks=1 00:23:36.523 00:23:36.523 ' 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:36.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.523 --rc genhtml_branch_coverage=1 00:23:36.523 --rc genhtml_function_coverage=1 00:23:36.523 --rc genhtml_legend=1 00:23:36.523 --rc geninfo_all_blocks=1 00:23:36.523 --rc geninfo_unexecuted_blocks=1 00:23:36.523 00:23:36.523 ' 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:36.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:36.523 --rc genhtml_branch_coverage=1 00:23:36.523 --rc genhtml_function_coverage=1 00:23:36.523 --rc genhtml_legend=1 00:23:36.523 --rc geninfo_all_blocks=1 00:23:36.523 --rc geninfo_unexecuted_blocks=1 00:23:36.523 00:23:36.523 ' 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:36.523 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:23:36.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:36.524 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:36.524 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:36.524 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:36.524 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:36.524 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:36.524 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:36.524 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:36.524 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:36.524 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:36.524 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.524 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.524 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.524 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:36.524 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:36.524 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:23:36.524 20:53:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.427 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:38.427 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:23:38.428 20:53:41 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:38.428 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:38.428 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:38.428 Found net devices under 0000:09:00.0: cvl_0_0 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:38.428 Found net devices under 0000:09:00.1: cvl_0_1 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:38.428 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:38.429 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:38.429 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:38.429 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:38.429 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:38.429 20:53:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:38.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:38.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:23:38.429 00:23:38.429 --- 10.0.0.2 ping statistics --- 00:23:38.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.429 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:38.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:38.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:23:38.429 00:23:38.429 --- 10.0.0.1 ping statistics --- 00:23:38.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.429 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:38.429 20:53:42 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:23:38.429 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:23:38.688 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:38.688 20:53:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:40.066 Waiting for block devices as requested 00:23:40.066 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:40.066 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:40.066 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:40.066 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:40.066 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:40.324 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:40.324 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:40.324 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:40.324 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:23:40.582 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:40.582 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:40.582 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:40.842 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:40.842 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:40.842 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:40.842 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:41.100 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:41.100 20:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:41.100 20:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:41.100 20:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:23:41.100 20:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:23:41.100 20:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:41.100 20:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
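[editor's note] The nvmf_tcp_init trace above (nvmf/common.sh@250-291) moves one of the two E810 ports into a dedicated network namespace, leaves the other in the root namespace, opens TCP port 4420 through iptables, and ping-tests both directions before the kernel target is configured. A condensed, standalone bash sketch of that topology follows; it assumes the interface names (cvl_0_0 / cvl_0_1), the 10.0.0.x addressing from this run, and that the two ports are physically looped back to each other as on this test bed, so it is illustrative rather than a drop-in replacement for nvmf/common.sh.

#!/usr/bin/env bash
# Sketch of the namespace topology built by nvmf_tcp_init in the trace above.
# Interface names and addresses are taken from this run and will differ on
# other machines.
set -e

NS=cvl_0_0_ns_spdk     # namespace that receives one port
NS_IF=cvl_0_0          # port moved into the namespace (gets 10.0.0.2)
ROOT_IF=cvl_0_1        # port left in the root namespace (gets 10.0.0.1)

# Start from clean interfaces and create the namespace.
ip -4 addr flush "$NS_IF"
ip -4 addr flush "$ROOT_IF"
ip netns add "$NS"
ip link set "$NS_IF" netns "$NS"

# Address both ends and bring the links (and loopback in the namespace) up.
ip addr add 10.0.0.1/24 dev "$ROOT_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$NS_IF"
ip link set "$ROOT_IF" up
ip netns exec "$NS" ip link set "$NS_IF" up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP (port 4420) in on the root-namespace port; the rule is
# tagged with an SPDK_NVMF comment so the teardown path can strip exactly
# this rule again (see the iptables-save | grep -v SPDK_NVMF step later in
# the log).
iptables -I INPUT 1 -i "$ROOT_IF" -p tcp --dport 4420 -j ACCEPT \
         -m comment --comment "SPDK_NVMF:-I INPUT 1 -i $ROOT_IF -p tcp --dport 4420 -j ACCEPT"

# Sanity-check reachability in both directions, as the trace does.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

In this particular test the kernel nvmet target is then configured to listen on the root-namespace address (10.0.0.1), which is why identify_kernel_nvmf.sh@15 resolves target_ip=10.0.0.1 in the trace that follows.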
00:23:41.100 20:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:23:41.100 20:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:23:41.100 20:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:41.100 No valid GPT data, bailing 00:23:41.100 20:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:41.100 20:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:41.100 20:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:41.100 20:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:23:41.100 20:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:23:41.100 20:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:41.100 20:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:41.394 20:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:41.394 20:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:41.394 20:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:23:41.394 20:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:23:41.394 20:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:23:41.395 20:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:23:41.395 20:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:23:41.395 20:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:23:41.395 20:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:23:41.395 20:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:41.395 20:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:23:41.395 00:23:41.395 Discovery Log Number of Records 2, Generation counter 2 00:23:41.395 =====Discovery Log Entry 0====== 00:23:41.395 trtype: tcp 00:23:41.395 adrfam: ipv4 00:23:41.395 subtype: current discovery subsystem 00:23:41.395 treq: not specified, sq flow control disable supported 00:23:41.395 portid: 1 00:23:41.395 trsvcid: 4420 00:23:41.395 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:41.395 traddr: 10.0.0.1 00:23:41.395 eflags: none 00:23:41.395 sectype: none 00:23:41.395 =====Discovery Log Entry 1====== 00:23:41.395 trtype: tcp 00:23:41.395 adrfam: ipv4 00:23:41.395 subtype: nvme subsystem 00:23:41.395 treq: not specified, sq flow control disable 
supported 00:23:41.395 portid: 1 00:23:41.395 trsvcid: 4420 00:23:41.395 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:41.395 traddr: 10.0.0.1 00:23:41.395 eflags: none 00:23:41.395 sectype: none 00:23:41.395 20:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:41.395 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:41.395 ===================================================== 00:23:41.395 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:41.395 ===================================================== 00:23:41.395 Controller Capabilities/Features 00:23:41.395 ================================ 00:23:41.395 Vendor ID: 0000 00:23:41.395 Subsystem Vendor ID: 0000 00:23:41.395 Serial Number: ad212f438582edca90bc 00:23:41.395 Model Number: Linux 00:23:41.395 Firmware Version: 6.8.9-20 00:23:41.395 Recommended Arb Burst: 0 00:23:41.395 IEEE OUI Identifier: 00 00 00 00:23:41.395 Multi-path I/O 00:23:41.395 May have multiple subsystem ports: No 00:23:41.395 May have multiple controllers: No 00:23:41.395 Associated with SR-IOV VF: No 00:23:41.395 Max Data Transfer Size: Unlimited 00:23:41.395 Max Number of Namespaces: 0 00:23:41.395 Max Number of I/O Queues: 1024 00:23:41.395 NVMe Specification Version (VS): 1.3 00:23:41.395 NVMe Specification Version (Identify): 1.3 00:23:41.395 Maximum Queue Entries: 1024 00:23:41.395 Contiguous Queues Required: No 00:23:41.395 Arbitration Mechanisms Supported 00:23:41.395 Weighted Round Robin: Not Supported 00:23:41.395 Vendor Specific: Not Supported 00:23:41.395 Reset Timeout: 7500 ms 00:23:41.395 Doorbell Stride: 4 bytes 00:23:41.395 NVM Subsystem Reset: Not Supported 00:23:41.395 Command Sets Supported 00:23:41.395 NVM Command Set: Supported 00:23:41.395 Boot Partition: Not Supported 00:23:41.395 Memory Page Size Minimum: 4096 bytes 00:23:41.395 Memory Page Size Maximum: 4096 bytes 00:23:41.395 Persistent Memory Region: Not Supported 00:23:41.395 Optional Asynchronous Events Supported 00:23:41.395 Namespace Attribute Notices: Not Supported 00:23:41.395 Firmware Activation Notices: Not Supported 00:23:41.395 ANA Change Notices: Not Supported 00:23:41.395 PLE Aggregate Log Change Notices: Not Supported 00:23:41.395 LBA Status Info Alert Notices: Not Supported 00:23:41.395 EGE Aggregate Log Change Notices: Not Supported 00:23:41.395 Normal NVM Subsystem Shutdown event: Not Supported 00:23:41.395 Zone Descriptor Change Notices: Not Supported 00:23:41.395 Discovery Log Change Notices: Supported 00:23:41.395 Controller Attributes 00:23:41.395 128-bit Host Identifier: Not Supported 00:23:41.395 Non-Operational Permissive Mode: Not Supported 00:23:41.395 NVM Sets: Not Supported 00:23:41.395 Read Recovery Levels: Not Supported 00:23:41.395 Endurance Groups: Not Supported 00:23:41.395 Predictable Latency Mode: Not Supported 00:23:41.395 Traffic Based Keep ALive: Not Supported 00:23:41.395 Namespace Granularity: Not Supported 00:23:41.395 SQ Associations: Not Supported 00:23:41.395 UUID List: Not Supported 00:23:41.395 Multi-Domain Subsystem: Not Supported 00:23:41.395 Fixed Capacity Management: Not Supported 00:23:41.395 Variable Capacity Management: Not Supported 00:23:41.395 Delete Endurance Group: Not Supported 00:23:41.395 Delete NVM Set: Not Supported 00:23:41.395 Extended LBA Formats Supported: Not Supported 00:23:41.395 Flexible Data Placement 
Supported: Not Supported 00:23:41.395 00:23:41.395 Controller Memory Buffer Support 00:23:41.395 ================================ 00:23:41.395 Supported: No 00:23:41.395 00:23:41.395 Persistent Memory Region Support 00:23:41.395 ================================ 00:23:41.395 Supported: No 00:23:41.395 00:23:41.395 Admin Command Set Attributes 00:23:41.395 ============================ 00:23:41.395 Security Send/Receive: Not Supported 00:23:41.395 Format NVM: Not Supported 00:23:41.395 Firmware Activate/Download: Not Supported 00:23:41.395 Namespace Management: Not Supported 00:23:41.395 Device Self-Test: Not Supported 00:23:41.395 Directives: Not Supported 00:23:41.395 NVMe-MI: Not Supported 00:23:41.395 Virtualization Management: Not Supported 00:23:41.395 Doorbell Buffer Config: Not Supported 00:23:41.395 Get LBA Status Capability: Not Supported 00:23:41.395 Command & Feature Lockdown Capability: Not Supported 00:23:41.395 Abort Command Limit: 1 00:23:41.395 Async Event Request Limit: 1 00:23:41.395 Number of Firmware Slots: N/A 00:23:41.395 Firmware Slot 1 Read-Only: N/A 00:23:41.395 Firmware Activation Without Reset: N/A 00:23:41.395 Multiple Update Detection Support: N/A 00:23:41.395 Firmware Update Granularity: No Information Provided 00:23:41.395 Per-Namespace SMART Log: No 00:23:41.395 Asymmetric Namespace Access Log Page: Not Supported 00:23:41.395 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:41.395 Command Effects Log Page: Not Supported 00:23:41.395 Get Log Page Extended Data: Supported 00:23:41.395 Telemetry Log Pages: Not Supported 00:23:41.395 Persistent Event Log Pages: Not Supported 00:23:41.395 Supported Log Pages Log Page: May Support 00:23:41.395 Commands Supported & Effects Log Page: Not Supported 00:23:41.395 Feature Identifiers & Effects Log Page:May Support 00:23:41.395 NVMe-MI Commands & Effects Log Page: May Support 00:23:41.395 Data Area 4 for Telemetry Log: Not Supported 00:23:41.395 Error Log Page Entries Supported: 1 00:23:41.395 Keep Alive: Not Supported 00:23:41.395 00:23:41.395 NVM Command Set Attributes 00:23:41.395 ========================== 00:23:41.395 Submission Queue Entry Size 00:23:41.395 Max: 1 00:23:41.395 Min: 1 00:23:41.395 Completion Queue Entry Size 00:23:41.395 Max: 1 00:23:41.395 Min: 1 00:23:41.395 Number of Namespaces: 0 00:23:41.395 Compare Command: Not Supported 00:23:41.395 Write Uncorrectable Command: Not Supported 00:23:41.395 Dataset Management Command: Not Supported 00:23:41.395 Write Zeroes Command: Not Supported 00:23:41.395 Set Features Save Field: Not Supported 00:23:41.395 Reservations: Not Supported 00:23:41.395 Timestamp: Not Supported 00:23:41.395 Copy: Not Supported 00:23:41.395 Volatile Write Cache: Not Present 00:23:41.395 Atomic Write Unit (Normal): 1 00:23:41.395 Atomic Write Unit (PFail): 1 00:23:41.395 Atomic Compare & Write Unit: 1 00:23:41.395 Fused Compare & Write: Not Supported 00:23:41.395 Scatter-Gather List 00:23:41.395 SGL Command Set: Supported 00:23:41.395 SGL Keyed: Not Supported 00:23:41.395 SGL Bit Bucket Descriptor: Not Supported 00:23:41.395 SGL Metadata Pointer: Not Supported 00:23:41.395 Oversized SGL: Not Supported 00:23:41.395 SGL Metadata Address: Not Supported 00:23:41.395 SGL Offset: Supported 00:23:41.395 Transport SGL Data Block: Not Supported 00:23:41.395 Replay Protected Memory Block: Not Supported 00:23:41.395 00:23:41.395 Firmware Slot Information 00:23:41.395 ========================= 00:23:41.395 Active slot: 0 00:23:41.395 00:23:41.395 00:23:41.395 Error Log 00:23:41.395 
========= 00:23:41.395 00:23:41.395 Active Namespaces 00:23:41.395 ================= 00:23:41.395 Discovery Log Page 00:23:41.395 ================== 00:23:41.395 Generation Counter: 2 00:23:41.395 Number of Records: 2 00:23:41.395 Record Format: 0 00:23:41.395 00:23:41.395 Discovery Log Entry 0 00:23:41.395 ---------------------- 00:23:41.395 Transport Type: 3 (TCP) 00:23:41.396 Address Family: 1 (IPv4) 00:23:41.396 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:41.396 Entry Flags: 00:23:41.396 Duplicate Returned Information: 0 00:23:41.396 Explicit Persistent Connection Support for Discovery: 0 00:23:41.396 Transport Requirements: 00:23:41.396 Secure Channel: Not Specified 00:23:41.396 Port ID: 1 (0x0001) 00:23:41.396 Controller ID: 65535 (0xffff) 00:23:41.396 Admin Max SQ Size: 32 00:23:41.396 Transport Service Identifier: 4420 00:23:41.396 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:41.396 Transport Address: 10.0.0.1 00:23:41.396 Discovery Log Entry 1 00:23:41.396 ---------------------- 00:23:41.396 Transport Type: 3 (TCP) 00:23:41.396 Address Family: 1 (IPv4) 00:23:41.396 Subsystem Type: 2 (NVM Subsystem) 00:23:41.396 Entry Flags: 00:23:41.396 Duplicate Returned Information: 0 00:23:41.396 Explicit Persistent Connection Support for Discovery: 0 00:23:41.396 Transport Requirements: 00:23:41.396 Secure Channel: Not Specified 00:23:41.396 Port ID: 1 (0x0001) 00:23:41.396 Controller ID: 65535 (0xffff) 00:23:41.396 Admin Max SQ Size: 32 00:23:41.396 Transport Service Identifier: 4420 00:23:41.396 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:41.396 Transport Address: 10.0.0.1 00:23:41.396 20:53:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:41.679 get_feature(0x01) failed 00:23:41.679 get_feature(0x02) failed 00:23:41.679 get_feature(0x04) failed 00:23:41.679 ===================================================== 00:23:41.679 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:41.679 ===================================================== 00:23:41.679 Controller Capabilities/Features 00:23:41.679 ================================ 00:23:41.679 Vendor ID: 0000 00:23:41.679 Subsystem Vendor ID: 0000 00:23:41.679 Serial Number: 06c4bac800cbccc3f99f 00:23:41.679 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:41.679 Firmware Version: 6.8.9-20 00:23:41.679 Recommended Arb Burst: 6 00:23:41.679 IEEE OUI Identifier: 00 00 00 00:23:41.679 Multi-path I/O 00:23:41.679 May have multiple subsystem ports: Yes 00:23:41.679 May have multiple controllers: Yes 00:23:41.679 Associated with SR-IOV VF: No 00:23:41.679 Max Data Transfer Size: Unlimited 00:23:41.679 Max Number of Namespaces: 1024 00:23:41.679 Max Number of I/O Queues: 128 00:23:41.679 NVMe Specification Version (VS): 1.3 00:23:41.679 NVMe Specification Version (Identify): 1.3 00:23:41.679 Maximum Queue Entries: 1024 00:23:41.679 Contiguous Queues Required: No 00:23:41.679 Arbitration Mechanisms Supported 00:23:41.679 Weighted Round Robin: Not Supported 00:23:41.679 Vendor Specific: Not Supported 00:23:41.679 Reset Timeout: 7500 ms 00:23:41.679 Doorbell Stride: 4 bytes 00:23:41.679 NVM Subsystem Reset: Not Supported 00:23:41.679 Command Sets Supported 00:23:41.679 NVM Command Set: Supported 00:23:41.679 Boot Partition: Not Supported 00:23:41.679 
Memory Page Size Minimum: 4096 bytes 00:23:41.679 Memory Page Size Maximum: 4096 bytes 00:23:41.679 Persistent Memory Region: Not Supported 00:23:41.679 Optional Asynchronous Events Supported 00:23:41.679 Namespace Attribute Notices: Supported 00:23:41.679 Firmware Activation Notices: Not Supported 00:23:41.679 ANA Change Notices: Supported 00:23:41.679 PLE Aggregate Log Change Notices: Not Supported 00:23:41.679 LBA Status Info Alert Notices: Not Supported 00:23:41.679 EGE Aggregate Log Change Notices: Not Supported 00:23:41.679 Normal NVM Subsystem Shutdown event: Not Supported 00:23:41.679 Zone Descriptor Change Notices: Not Supported 00:23:41.679 Discovery Log Change Notices: Not Supported 00:23:41.679 Controller Attributes 00:23:41.679 128-bit Host Identifier: Supported 00:23:41.679 Non-Operational Permissive Mode: Not Supported 00:23:41.679 NVM Sets: Not Supported 00:23:41.679 Read Recovery Levels: Not Supported 00:23:41.679 Endurance Groups: Not Supported 00:23:41.679 Predictable Latency Mode: Not Supported 00:23:41.679 Traffic Based Keep ALive: Supported 00:23:41.679 Namespace Granularity: Not Supported 00:23:41.679 SQ Associations: Not Supported 00:23:41.679 UUID List: Not Supported 00:23:41.679 Multi-Domain Subsystem: Not Supported 00:23:41.679 Fixed Capacity Management: Not Supported 00:23:41.679 Variable Capacity Management: Not Supported 00:23:41.679 Delete Endurance Group: Not Supported 00:23:41.679 Delete NVM Set: Not Supported 00:23:41.679 Extended LBA Formats Supported: Not Supported 00:23:41.679 Flexible Data Placement Supported: Not Supported 00:23:41.679 00:23:41.679 Controller Memory Buffer Support 00:23:41.679 ================================ 00:23:41.679 Supported: No 00:23:41.679 00:23:41.679 Persistent Memory Region Support 00:23:41.679 ================================ 00:23:41.679 Supported: No 00:23:41.679 00:23:41.679 Admin Command Set Attributes 00:23:41.679 ============================ 00:23:41.679 Security Send/Receive: Not Supported 00:23:41.679 Format NVM: Not Supported 00:23:41.679 Firmware Activate/Download: Not Supported 00:23:41.679 Namespace Management: Not Supported 00:23:41.679 Device Self-Test: Not Supported 00:23:41.679 Directives: Not Supported 00:23:41.679 NVMe-MI: Not Supported 00:23:41.679 Virtualization Management: Not Supported 00:23:41.679 Doorbell Buffer Config: Not Supported 00:23:41.679 Get LBA Status Capability: Not Supported 00:23:41.679 Command & Feature Lockdown Capability: Not Supported 00:23:41.679 Abort Command Limit: 4 00:23:41.679 Async Event Request Limit: 4 00:23:41.679 Number of Firmware Slots: N/A 00:23:41.679 Firmware Slot 1 Read-Only: N/A 00:23:41.679 Firmware Activation Without Reset: N/A 00:23:41.679 Multiple Update Detection Support: N/A 00:23:41.679 Firmware Update Granularity: No Information Provided 00:23:41.679 Per-Namespace SMART Log: Yes 00:23:41.679 Asymmetric Namespace Access Log Page: Supported 00:23:41.679 ANA Transition Time : 10 sec 00:23:41.679 00:23:41.679 Asymmetric Namespace Access Capabilities 00:23:41.679 ANA Optimized State : Supported 00:23:41.679 ANA Non-Optimized State : Supported 00:23:41.679 ANA Inaccessible State : Supported 00:23:41.679 ANA Persistent Loss State : Supported 00:23:41.679 ANA Change State : Supported 00:23:41.679 ANAGRPID is not changed : No 00:23:41.679 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:41.679 00:23:41.679 ANA Group Identifier Maximum : 128 00:23:41.679 Number of ANA Group Identifiers : 128 00:23:41.679 Max Number of Allowed Namespaces : 1024 00:23:41.679 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:41.679 Command Effects Log Page: Supported 00:23:41.679 Get Log Page Extended Data: Supported 00:23:41.679 Telemetry Log Pages: Not Supported 00:23:41.679 Persistent Event Log Pages: Not Supported 00:23:41.679 Supported Log Pages Log Page: May Support 00:23:41.679 Commands Supported & Effects Log Page: Not Supported 00:23:41.679 Feature Identifiers & Effects Log Page:May Support 00:23:41.679 NVMe-MI Commands & Effects Log Page: May Support 00:23:41.679 Data Area 4 for Telemetry Log: Not Supported 00:23:41.679 Error Log Page Entries Supported: 128 00:23:41.679 Keep Alive: Supported 00:23:41.679 Keep Alive Granularity: 1000 ms 00:23:41.679 00:23:41.679 NVM Command Set Attributes 00:23:41.679 ========================== 00:23:41.679 Submission Queue Entry Size 00:23:41.679 Max: 64 00:23:41.679 Min: 64 00:23:41.679 Completion Queue Entry Size 00:23:41.679 Max: 16 00:23:41.679 Min: 16 00:23:41.679 Number of Namespaces: 1024 00:23:41.679 Compare Command: Not Supported 00:23:41.679 Write Uncorrectable Command: Not Supported 00:23:41.679 Dataset Management Command: Supported 00:23:41.679 Write Zeroes Command: Supported 00:23:41.679 Set Features Save Field: Not Supported 00:23:41.679 Reservations: Not Supported 00:23:41.679 Timestamp: Not Supported 00:23:41.679 Copy: Not Supported 00:23:41.679 Volatile Write Cache: Present 00:23:41.679 Atomic Write Unit (Normal): 1 00:23:41.679 Atomic Write Unit (PFail): 1 00:23:41.679 Atomic Compare & Write Unit: 1 00:23:41.679 Fused Compare & Write: Not Supported 00:23:41.679 Scatter-Gather List 00:23:41.679 SGL Command Set: Supported 00:23:41.679 SGL Keyed: Not Supported 00:23:41.679 SGL Bit Bucket Descriptor: Not Supported 00:23:41.679 SGL Metadata Pointer: Not Supported 00:23:41.679 Oversized SGL: Not Supported 00:23:41.679 SGL Metadata Address: Not Supported 00:23:41.679 SGL Offset: Supported 00:23:41.679 Transport SGL Data Block: Not Supported 00:23:41.679 Replay Protected Memory Block: Not Supported 00:23:41.679 00:23:41.679 Firmware Slot Information 00:23:41.679 ========================= 00:23:41.679 Active slot: 0 00:23:41.679 00:23:41.679 Asymmetric Namespace Access 00:23:41.679 =========================== 00:23:41.679 Change Count : 0 00:23:41.679 Number of ANA Group Descriptors : 1 00:23:41.679 ANA Group Descriptor : 0 00:23:41.679 ANA Group ID : 1 00:23:41.679 Number of NSID Values : 1 00:23:41.679 Change Count : 0 00:23:41.679 ANA State : 1 00:23:41.679 Namespace Identifier : 1 00:23:41.679 00:23:41.679 Commands Supported and Effects 00:23:41.679 ============================== 00:23:41.679 Admin Commands 00:23:41.679 -------------- 00:23:41.679 Get Log Page (02h): Supported 00:23:41.679 Identify (06h): Supported 00:23:41.679 Abort (08h): Supported 00:23:41.680 Set Features (09h): Supported 00:23:41.680 Get Features (0Ah): Supported 00:23:41.680 Asynchronous Event Request (0Ch): Supported 00:23:41.680 Keep Alive (18h): Supported 00:23:41.680 I/O Commands 00:23:41.680 ------------ 00:23:41.680 Flush (00h): Supported 00:23:41.680 Write (01h): Supported LBA-Change 00:23:41.680 Read (02h): Supported 00:23:41.680 Write Zeroes (08h): Supported LBA-Change 00:23:41.680 Dataset Management (09h): Supported 00:23:41.680 00:23:41.680 Error Log 00:23:41.680 ========= 00:23:41.680 Entry: 0 00:23:41.680 Error Count: 0x3 00:23:41.680 Submission Queue Id: 0x0 00:23:41.680 Command Id: 0x5 00:23:41.680 Phase Bit: 0 00:23:41.680 Status Code: 0x2 00:23:41.680 Status Code Type: 0x0 00:23:41.680 Do Not Retry: 1 00:23:41.680 
Error Location: 0x28 00:23:41.680 LBA: 0x0 00:23:41.680 Namespace: 0x0 00:23:41.680 Vendor Log Page: 0x0 00:23:41.680 ----------- 00:23:41.680 Entry: 1 00:23:41.680 Error Count: 0x2 00:23:41.680 Submission Queue Id: 0x0 00:23:41.680 Command Id: 0x5 00:23:41.680 Phase Bit: 0 00:23:41.680 Status Code: 0x2 00:23:41.680 Status Code Type: 0x0 00:23:41.680 Do Not Retry: 1 00:23:41.680 Error Location: 0x28 00:23:41.680 LBA: 0x0 00:23:41.680 Namespace: 0x0 00:23:41.680 Vendor Log Page: 0x0 00:23:41.680 ----------- 00:23:41.680 Entry: 2 00:23:41.680 Error Count: 0x1 00:23:41.680 Submission Queue Id: 0x0 00:23:41.680 Command Id: 0x4 00:23:41.680 Phase Bit: 0 00:23:41.680 Status Code: 0x2 00:23:41.680 Status Code Type: 0x0 00:23:41.680 Do Not Retry: 1 00:23:41.680 Error Location: 0x28 00:23:41.680 LBA: 0x0 00:23:41.680 Namespace: 0x0 00:23:41.680 Vendor Log Page: 0x0 00:23:41.680 00:23:41.680 Number of Queues 00:23:41.680 ================ 00:23:41.680 Number of I/O Submission Queues: 128 00:23:41.680 Number of I/O Completion Queues: 128 00:23:41.680 00:23:41.680 ZNS Specific Controller Data 00:23:41.680 ============================ 00:23:41.680 Zone Append Size Limit: 0 00:23:41.680 00:23:41.680 00:23:41.680 Active Namespaces 00:23:41.680 ================= 00:23:41.680 get_feature(0x05) failed 00:23:41.680 Namespace ID:1 00:23:41.680 Command Set Identifier: NVM (00h) 00:23:41.680 Deallocate: Supported 00:23:41.680 Deallocated/Unwritten Error: Not Supported 00:23:41.680 Deallocated Read Value: Unknown 00:23:41.680 Deallocate in Write Zeroes: Not Supported 00:23:41.680 Deallocated Guard Field: 0xFFFF 00:23:41.680 Flush: Supported 00:23:41.680 Reservation: Not Supported 00:23:41.680 Namespace Sharing Capabilities: Multiple Controllers 00:23:41.680 Size (in LBAs): 1953525168 (931GiB) 00:23:41.680 Capacity (in LBAs): 1953525168 (931GiB) 00:23:41.680 Utilization (in LBAs): 1953525168 (931GiB) 00:23:41.680 UUID: 802104c3-2e4a-40f9-a14c-5e4bc1afc604 00:23:41.680 Thin Provisioning: Not Supported 00:23:41.680 Per-NS Atomic Units: Yes 00:23:41.680 Atomic Boundary Size (Normal): 0 00:23:41.680 Atomic Boundary Size (PFail): 0 00:23:41.680 Atomic Boundary Offset: 0 00:23:41.680 NGUID/EUI64 Never Reused: No 00:23:41.680 ANA group ID: 1 00:23:41.680 Namespace Write Protected: No 00:23:41.680 Number of LBA Formats: 1 00:23:41.680 Current LBA Format: LBA Format #00 00:23:41.680 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:41.680 00:23:41.680 20:53:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:41.680 20:53:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:41.680 20:53:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:23:41.680 20:53:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:41.680 20:53:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:23:41.680 20:53:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:41.680 20:53:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:41.680 rmmod nvme_tcp 00:23:41.680 rmmod nvme_fabrics 00:23:41.680 20:53:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:41.680 20:53:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:23:41.680 20:53:45 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:23:41.680 20:53:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:23:41.680 20:53:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:41.680 20:53:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:41.680 20:53:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:41.680 20:53:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:23:41.680 20:53:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:23:41.680 20:53:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:41.680 20:53:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:23:41.680 20:53:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:41.680 20:53:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:41.680 20:53:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.680 20:53:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:41.680 20:53:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.591 20:53:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:43.591 20:53:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:43.591 20:53:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:43.591 20:53:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:23:43.591 20:53:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:43.591 20:53:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:43.591 20:53:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:43.591 20:53:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:43.591 20:53:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:23:43.591 20:53:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:23:43.591 20:53:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:44.967 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:44.967 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:44.967 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:44.967 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:44.967 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:44.967 0000:00:04.2 
(8086 0e22): ioatdma -> vfio-pci 00:23:44.967 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:44.967 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:44.967 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:44.967 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:44.967 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:44.967 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:44.967 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:44.967 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:44.967 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:44.967 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:45.903 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:23:46.162 00:23:46.162 real 0m9.924s 00:23:46.162 user 0m2.206s 00:23:46.162 sys 0m3.700s 00:23:46.162 20:53:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:46.162 20:53:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.162 ************************************ 00:23:46.162 END TEST nvmf_identify_kernel_target 00:23:46.162 ************************************ 00:23:46.162 20:53:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:46.162 20:53:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:46.162 20:53:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:46.162 20:53:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.162 ************************************ 00:23:46.162 START TEST nvmf_auth_host 00:23:46.162 ************************************ 00:23:46.162 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:46.162 * Looking for test storage... 
00:23:46.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:46.162 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:46.162 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:23:46.162 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:46.162 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:46.162 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:46.162 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:46.162 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:46.162 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:46.162 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:46.162 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:46.162 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:46.162 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:46.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.163 --rc genhtml_branch_coverage=1 00:23:46.163 --rc genhtml_function_coverage=1 00:23:46.163 --rc genhtml_legend=1 00:23:46.163 --rc geninfo_all_blocks=1 00:23:46.163 --rc geninfo_unexecuted_blocks=1 00:23:46.163 00:23:46.163 ' 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:46.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.163 --rc genhtml_branch_coverage=1 00:23:46.163 --rc genhtml_function_coverage=1 00:23:46.163 --rc genhtml_legend=1 00:23:46.163 --rc geninfo_all_blocks=1 00:23:46.163 --rc geninfo_unexecuted_blocks=1 00:23:46.163 00:23:46.163 ' 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:46.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.163 --rc genhtml_branch_coverage=1 00:23:46.163 --rc genhtml_function_coverage=1 00:23:46.163 --rc genhtml_legend=1 00:23:46.163 --rc geninfo_all_blocks=1 00:23:46.163 --rc geninfo_unexecuted_blocks=1 00:23:46.163 00:23:46.163 ' 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:46.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.163 --rc genhtml_branch_coverage=1 00:23:46.163 --rc genhtml_function_coverage=1 00:23:46.163 --rc genhtml_legend=1 00:23:46.163 --rc geninfo_all_blocks=1 00:23:46.163 --rc geninfo_unexecuted_blocks=1 00:23:46.163 00:23:46.163 ' 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:46.163 20:53:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:46.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:46.163 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:46.164 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:46.164 20:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:23:48.696 20:53:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:48.696 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:48.696 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:48.696 
20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:48.696 Found net devices under 0000:09:00.0: cvl_0_0 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:48.696 Found net devices under 0000:09:00.1: cvl_0_1 00:23:48.696 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.697 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:48.697 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:23:48.697 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:48.697 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:48.697 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:48.697 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:48.697 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:48.697 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:48.697 20:53:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:48.697 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:48.697 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:48.697 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:48.697 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:48.697 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:48.697 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:48.697 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:48.697 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:48.697 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:48.697 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:48.697 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:48.697 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:48.697 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:48.697 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:48.697 20:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:48.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:48.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.406 ms 00:23:48.697 00:23:48.697 --- 10.0.0.2 ping statistics --- 00:23:48.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.697 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:48.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:48.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:23:48.697 00:23:48.697 --- 10.0.0.1 ping statistics --- 00:23:48.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.697 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1752229 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1752229 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1752229 ']' 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
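For reference, the nvmf_tcp_init sequence traced above reduces to the following sketch of the two-port test topology: the target-side e810 port is moved into its own network namespace while the initiator port stays in the root namespace, so host and target talk over real NICs on 10.0.0.1/10.0.0.2. Interface names (cvl_0_0/cvl_0_1) and addresses are the ones from this run; the nvmf_tgt path is abbreviated.

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                              # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                    # initiator address (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0      # target address (test namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT           # let NVMe/TCP reach the listener
ping -c 1 10.0.0.2                                                     # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                       # target namespace -> root namespace
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth   # target runs inside the namespace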
00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:48.697 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:48.956 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=dda37a0db704ac758cd60cb92e555471 00:23:48.956 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:23:48.956 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Azs 00:23:48.956 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key dda37a0db704ac758cd60cb92e555471 0 00:23:48.956 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 dda37a0db704ac758cd60cb92e555471 0 00:23:48.956 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:48.956 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:48.956 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=dda37a0db704ac758cd60cb92e555471 00:23:48.956 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:23:48.956 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:48.956 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Azs 00:23:48.956 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Azs 00:23:48.956 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Azs 00:23:48.956 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:23:48.956 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:48.956 20:53:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:48.956 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:48.956 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:23:48.956 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:23:48.956 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:48.956 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1320845aeedc4bfe78c9293c4e900ce5cfaae0b7a8bce7d4ff4b84bdac901c9d 00:23:48.956 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:23:48.956 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ClA 00:23:48.956 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1320845aeedc4bfe78c9293c4e900ce5cfaae0b7a8bce7d4ff4b84bdac901c9d 3 00:23:48.956 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1320845aeedc4bfe78c9293c4e900ce5cfaae0b7a8bce7d4ff4b84bdac901c9d 3 00:23:48.956 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:48.956 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:48.956 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1320845aeedc4bfe78c9293c4e900ce5cfaae0b7a8bce7d4ff4b84bdac901c9d 00:23:48.956 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:23:48.956 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:48.956 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ClA 00:23:48.956 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ClA 00:23:48.956 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.ClA 00:23:48.956 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:48.956 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:48.956 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=49345ecc6283267c5473e672f591095c892d709d609cc9f1 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.2Ir 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 49345ecc6283267c5473e672f591095c892d709d609cc9f1 0 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 49345ecc6283267c5473e672f591095c892d709d609cc9f1 0 
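The gen_dhchap_key/format_key traces above and below boil down to: draw len/2 random bytes as an ASCII-hex secret with xxd, then wrap it in the DH-HMAC-CHAP secret representation DHHC-1:<digest-id>:<base64 payload>:. A hedged bash sketch of that shape follows (not the test's actual helper); treating the base64 payload as the ASCII secret followed by a little-endian CRC32 trailer is an assumption inferred from the key lengths visible in this log.

gen_dhchap_key_sketch() {
    local digest_id=$1 len=$2        # digest id as in the trace: 0=null 1=sha256 2=sha384 3=sha512
    local hexkey
    hexkey=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)          # same randomness source as the test
    python3 - "$hexkey" "$digest_id" <<'PYEOF'
import base64, sys, zlib
secret = sys.argv[1].encode()                                   # the ASCII hex string itself is the secret
crc = zlib.crc32(secret).to_bytes(4, "little")                  # assumed CRC32 trailer, little-endian
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(secret + crc).decode()))
PYEOF
}
# e.g.: gen_dhchap_key_sketch 1 32 > /tmp/spdk.key-sha256.example && chmod 0600 /tmp/spdk.key-sha256.example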
00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=49345ecc6283267c5473e672f591095c892d709d609cc9f1 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.2Ir 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.2Ir 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.2Ir 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8e675a58ec85bdff94e05a60e855732878009757c76bdea4 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.aye 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8e675a58ec85bdff94e05a60e855732878009757c76bdea4 2 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8e675a58ec85bdff94e05a60e855732878009757c76bdea4 2 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8e675a58ec85bdff94e05a60e855732878009757c76bdea4 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.aye 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.aye 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.aye 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:48.957 20:53:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9254e3be311d5410acd6e1712081feb9 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.JOB 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9254e3be311d5410acd6e1712081feb9 1 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9254e3be311d5410acd6e1712081feb9 1 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9254e3be311d5410acd6e1712081feb9 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.JOB 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.JOB 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.JOB 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ca3847a950937f1c311fe2f257e35deb 00:23:48.957 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:23:49.216 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.qlE 00:23:49.216 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ca3847a950937f1c311fe2f257e35deb 1 00:23:49.216 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ca3847a950937f1c311fe2f257e35deb 1 00:23:49.216 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:49.216 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:49.216 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=ca3847a950937f1c311fe2f257e35deb 00:23:49.216 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:23:49.216 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:49.216 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.qlE 00:23:49.216 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.qlE 00:23:49.216 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.qlE 00:23:49.216 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:49.216 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:49.216 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:49.216 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:49.216 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:23:49.216 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:23:49.216 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:49.216 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7eea6498a2ca02af418c2a485c84e63ba77229437a085f86 00:23:49.216 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:23:49.216 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.PFs 00:23:49.216 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7eea6498a2ca02af418c2a485c84e63ba77229437a085f86 2 00:23:49.216 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7eea6498a2ca02af418c2a485c84e63ba77229437a085f86 2 00:23:49.216 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:49.216 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7eea6498a2ca02af418c2a485c84e63ba77229437a085f86 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.PFs 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.PFs 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.PFs 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:49.217 20:53:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cc7f358c5ff042dd99451835079e309a 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.uUu 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cc7f358c5ff042dd99451835079e309a 0 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cc7f358c5ff042dd99451835079e309a 0 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=cc7f358c5ff042dd99451835079e309a 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.uUu 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.uUu 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.uUu 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a5fbdba17f5ffa6ad3a06ebd71d0f9ea818d628da9ab9b1dca7bced45face6fd 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.w4S 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a5fbdba17f5ffa6ad3a06ebd71d0f9ea818d628da9ab9b1dca7bced45face6fd 3 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a5fbdba17f5ffa6ad3a06ebd71d0f9ea818d628da9ab9b1dca7bced45face6fd 3 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a5fbdba17f5ffa6ad3a06ebd71d0f9ea818d628da9ab9b1dca7bced45face6fd 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.w4S 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.w4S 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.w4S 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1752229 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1752229 ']' 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:49.217 20:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.476 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:49.476 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:23:49.476 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:49.476 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Azs 00:23:49.476 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.476 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.476 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.476 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.ClA ]] 00:23:49.476 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ClA 00:23:49.476 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.476 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.476 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.476 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:49.476 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.2Ir 00:23:49.476 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.476 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.476 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.476 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.aye ]] 00:23:49.476 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.aye 00:23:49.476 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.476 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.476 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.476 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:49.476 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.JOB 00:23:49.476 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.476 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.qlE ]] 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.qlE 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.PFs 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.uUu ]] 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.uUu 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.w4S 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:49.735 20:53:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:49.735 20:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:50.668 Waiting for block devices as requested 00:23:50.668 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:50.668 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:50.926 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:50.926 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:50.926 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:50.926 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:51.185 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:51.185 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:51.185 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:23:51.443 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:51.443 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:51.702 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:51.702 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:51.702 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:51.702 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:51.961 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:51.961 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:52.528 20:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:52.528 20:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:52.528 20:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:23:52.528 20:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:23:52.528 20:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:52.528 20:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:52.528 20:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:23:52.528 20:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:23:52.528 20:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:52.528 No valid GPT data, bailing 00:23:52.528 20:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:52.528 20:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:23:52.528 20:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:23:52.528 20:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:23:52.528 20:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:23:52.528 20:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:52.528 20:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:52.528 20:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:52.528 20:53:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:52.528 20:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:23:52.528 20:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:23:52.528 20:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:23:52.528 20:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:23:52.528 20:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:23:52.528 20:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:23:52.528 20:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:23:52.528 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:52.528 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:23:52.528 00:23:52.528 Discovery Log Number of Records 2, Generation counter 2 00:23:52.528 =====Discovery Log Entry 0====== 00:23:52.528 trtype: tcp 00:23:52.528 adrfam: ipv4 00:23:52.528 subtype: current discovery subsystem 00:23:52.528 treq: not specified, sq flow control disable supported 00:23:52.528 portid: 1 00:23:52.528 trsvcid: 4420 00:23:52.528 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:52.528 traddr: 10.0.0.1 00:23:52.528 eflags: none 00:23:52.528 sectype: none 00:23:52.528 =====Discovery Log Entry 1====== 00:23:52.528 trtype: tcp 00:23:52.528 adrfam: ipv4 00:23:52.528 subtype: nvme subsystem 00:23:52.528 treq: not specified, sq flow control disable supported 00:23:52.528 portid: 1 00:23:52.528 trsvcid: 4420 00:23:52.528 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:52.528 traddr: 10.0.0.1 00:23:52.528 eflags: none 00:23:52.528 sectype: none 00:23:52.528 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:52.528 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:52.528 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:52.528 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:52.528 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.528 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:52.528 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:52.528 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:52.529 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkzNDVlY2M2MjgzMjY3YzU0NzNlNjcyZjU5MTA5NWM4OTJkNzA5ZDYwOWNjOWYxSTOurg==: 00:23:52.529 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: 00:23:52.529 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:52.529 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:23:52.529 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkzNDVlY2M2MjgzMjY3YzU0NzNlNjcyZjU5MTA5NWM4OTJkNzA5ZDYwOWNjOWYxSTOurg==: 00:23:52.529 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: ]] 00:23:52.529 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: 00:23:52.529 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:52.529 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:52.529 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:52.529 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:52.529 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:52.529 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.529 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:52.529 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:52.529 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:52.529 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.529 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:52.529 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.529 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.529 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.529 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.529 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:52.529 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:52.529 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:52.529 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.529 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.529 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:52.529 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.529 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:52.529 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:52.529 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:52.529 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:52.529 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.529 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.788 nvme0n1 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRhMzdhMGRiNzA0YWM3NThjZDYwY2I5MmU1NTU0NzHxtfuB: 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRhMzdhMGRiNzA0YWM3NThjZDYwY2I5MmU1NTU0NzHxtfuB: 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: ]] 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
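On the host side, each connect_authenticate iteration in the loop above amounts to roughly this RPC sequence (a sketch using the rpc.py wrapper rather than the test's rpc_cmd helper; key0/ckey0 are the key names registered earlier with keyring_file_add_key):

./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
./scripts/rpc.py bdev_nvme_get_controllers             # a successful DH-HMAC-CHAP handshake reports nvme0
./scripts/rpc.py bdev_nvme_detach_controller nvme0     # tear down before the next digest/dhgroup/key combination

The matching target-side secret for the same keyid is installed into the kernel nvmet host entry by nvmet_auth_set_key, which is what the echo 'hmac(sha256)' / echo ffdhe2048 / echo DHHC-1:... traces around it are doing.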
00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.788 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.048 nvme0n1 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.048 20:53:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkzNDVlY2M2MjgzMjY3YzU0NzNlNjcyZjU5MTA5NWM4OTJkNzA5ZDYwOWNjOWYxSTOurg==: 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkzNDVlY2M2MjgzMjY3YzU0NzNlNjcyZjU5MTA5NWM4OTJkNzA5ZDYwOWNjOWYxSTOurg==: 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: ]] 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.048 nvme0n1 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.048 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTI1NGUzYmUzMTFkNTQxMGFjZDZlMTcxMjA4MWZlYjlr7V90: 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:OTI1NGUzYmUzMTFkNTQxMGFjZDZlMTcxMjA4MWZlYjlr7V90: 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: ]] 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.307 nvme0n1 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2VlYTY0OThhMmNhMDJhZjQxOGMyYTQ4NWM4NGU2M2JhNzcyMjk0MzdhMDg1Zjg22g6MLQ==: 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2VlYTY0OThhMmNhMDJhZjQxOGMyYTQ4NWM4NGU2M2JhNzcyMjk0MzdhMDg1Zjg22g6MLQ==: 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: ]] 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.307 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:53.308 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:53.308 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:53.308 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.308 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:53.308 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.308 20:53:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.308 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.566 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:23:53.566 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:53.566 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:53.566 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:53.566 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.566 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.566 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:53.566 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.566 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:53.566 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:53.566 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:53.566 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:53.566 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.566 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.566 nvme0n1 00:23:53.566 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.566 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.566 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.566 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.566 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.566 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.566 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.566 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.566 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.566 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.566 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.567 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.567 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:53.567 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.567 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:53.567 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:53.567 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:53.567 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTVmYmRiYTE3ZjVmZmE2YWQzYTA2ZWJkNzFkMGY5ZWE4MThkNjI4ZGE5YWI5YjFkY2E3YmNlZDQ1ZmFjZTZmZGROcto=: 
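In the keyid=4 pass that continues below there is no controller (bidirectional) secret: ckey expands to an empty string, so the --dhchap-ctrlr-key argument is dropped and authentication runs one-way only. The trace shows this being handled with bash's :+ expansion; a minimal sketch of that pattern follows, with the key${keyid}/ckey${keyid} naming inferred from the traced expansion rather than quoted from the script.

# Only pass --dhchap-ctrlr-key when a controller key exists for this key index
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"   # "${ckey[@]}" expands to nothing when ckeys[keyid] is empty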
00:23:53.567 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:53.567 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:53.567 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:53.567 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTVmYmRiYTE3ZjVmZmE2YWQzYTA2ZWJkNzFkMGY5ZWE4MThkNjI4ZGE5YWI5YjFkY2E3YmNlZDQ1ZmFjZTZmZGROcto=: 00:23:53.567 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:53.567 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:53.567 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.567 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:53.567 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:53.567 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:53.567 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.567 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:53.567 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.567 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.567 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.567 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.567 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:53.567 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:53.567 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:53.567 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.567 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.567 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:53.567 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.567 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:53.567 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:53.567 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:53.567 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:53.567 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.567 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.825 nvme0n1 00:23:53.825 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.825 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.825 20:53:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.825 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.825 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.825 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.825 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.825 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.825 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.825 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.825 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.825 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:53.825 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.825 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:53.825 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.825 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:53.825 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:53.825 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:53.825 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRhMzdhMGRiNzA0YWM3NThjZDYwY2I5MmU1NTU0NzHxtfuB: 00:23:53.826 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: 00:23:53.826 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:53.826 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:53.826 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRhMzdhMGRiNzA0YWM3NThjZDYwY2I5MmU1NTU0NzHxtfuB: 00:23:53.826 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: ]] 00:23:53.826 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: 00:23:53.826 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:53.826 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.826 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:53.826 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:53.826 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:53.826 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.826 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:53.826 
20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.826 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.826 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.826 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.826 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:53.826 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:53.826 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:53.826 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.826 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.826 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:53.826 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.826 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:53.826 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:53.826 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:53.826 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:53.826 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.826 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.084 nvme0n1 00:23:54.084 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.084 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.084 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.084 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.084 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.084 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.084 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.084 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.084 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.084 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.084 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.084 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.084 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:54.085 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.085 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:23:54.085 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:54.085 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:54.085 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkzNDVlY2M2MjgzMjY3YzU0NzNlNjcyZjU5MTA5NWM4OTJkNzA5ZDYwOWNjOWYxSTOurg==: 00:23:54.085 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: 00:23:54.085 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:54.085 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:54.085 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkzNDVlY2M2MjgzMjY3YzU0NzNlNjcyZjU5MTA5NWM4OTJkNzA5ZDYwOWNjOWYxSTOurg==: 00:23:54.085 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: ]] 00:23:54.085 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: 00:23:54.085 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:54.085 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:54.085 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:54.085 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:54.085 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:54.085 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:54.085 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:54.085 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.085 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.085 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.085 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:54.085 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:54.085 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:54.085 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:54.085 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.085 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.085 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:54.085 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.085 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:54.085 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:54.085 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:54.085 20:53:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:54.085 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.085 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.343 nvme0n1 00:23:54.343 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.343 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.343 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTI1NGUzYmUzMTFkNTQxMGFjZDZlMTcxMjA4MWZlYjlr7V90: 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTI1NGUzYmUzMTFkNTQxMGFjZDZlMTcxMjA4MWZlYjlr7V90: 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: ]] 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:54.344 20:53:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.344 20:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.601 nvme0n1 00:23:54.601 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.601 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.601 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.601 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.601 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.601 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.601 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.601 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.601 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.601 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.601 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.601 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.601 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:54.601 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.601 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:54.601 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:54.601 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:54.601 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2VlYTY0OThhMmNhMDJhZjQxOGMyYTQ4NWM4NGU2M2JhNzcyMjk0MzdhMDg1Zjg22g6MLQ==: 00:23:54.601 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: 00:23:54.601 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:54.601 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:54.601 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2VlYTY0OThhMmNhMDJhZjQxOGMyYTQ4NWM4NGU2M2JhNzcyMjk0MzdhMDg1Zjg22g6MLQ==: 00:23:54.601 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: ]] 00:23:54.601 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: 00:23:54.601 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:54.601 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:54.601 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:54.601 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:54.601 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:54.601 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:54.601 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:54.601 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.601 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.601 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.602 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:54.602 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:54.602 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:54.602 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:54.602 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.602 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.602 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:54.602 20:53:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.602 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:54.602 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:54.602 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:54.602 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:54.602 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.602 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.859 nvme0n1 00:23:54.859 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.859 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.859 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.859 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.859 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.859 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.859 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.859 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTVmYmRiYTE3ZjVmZmE2YWQzYTA2ZWJkNzFkMGY5ZWE4MThkNjI4ZGE5YWI5YjFkY2E3YmNlZDQ1ZmFjZTZmZGROcto=: 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTVmYmRiYTE3ZjVmZmE2YWQzYTA2ZWJkNzFkMGY5ZWE4MThkNjI4ZGE5YWI5YjFkY2E3YmNlZDQ1ZmFjZTZmZGROcto=: 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 
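The get_main_ns_ip expansion that precedes every attach (nvmf/common.sh@769-783 in the trace) simply picks the initiator-side address for the transport in use. Below is a rough reconstruction consistent with the traced steps; the transport lookup is hard-coded to "tcp" here because the variable the real helper reads is not visible in this excerpt.

# Approximation of get_main_ns_ip as traced above: map transport -> variable name, then dereference it
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )
    ip=${ip_candidates["tcp"]}   # "tcp" comes from the test environment in the real helper
    echo "${!ip}"                # indirect expansion; NVMF_INITIATOR_IP resolves to 10.0.0.1 in this run
}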
00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.860 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.119 nvme0n1 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRhMzdhMGRiNzA0YWM3NThjZDYwY2I5MmU1NTU0NzHxtfuB: 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRhMzdhMGRiNzA0YWM3NThjZDYwY2I5MmU1NTU0NzHxtfuB: 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: ]] 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.119 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.377 nvme0n1 00:23:55.377 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.377 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:55.377 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.377 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:55.377 20:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.377 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.377 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.377 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:55.377 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.377 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.377 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.377 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:55.377 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:55.377 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:55.377 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:55.377 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:55.377 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:55.377 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkzNDVlY2M2MjgzMjY3YzU0NzNlNjcyZjU5MTA5NWM4OTJkNzA5ZDYwOWNjOWYxSTOurg==: 00:23:55.377 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: 00:23:55.377 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:55.377 20:53:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:55.377 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkzNDVlY2M2MjgzMjY3YzU0NzNlNjcyZjU5MTA5NWM4OTJkNzA5ZDYwOWNjOWYxSTOurg==: 00:23:55.377 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: ]] 00:23:55.377 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: 00:23:55.377 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:55.377 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:55.377 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:55.377 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:55.377 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:55.377 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:55.377 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:55.377 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.377 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.377 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.377 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:55.377 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:55.377 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:55.377 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:55.377 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.377 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.377 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:55.378 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.378 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:55.378 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:55.378 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:55.378 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:55.378 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.378 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.641 nvme0n1 00:23:55.641 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.641 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:23:55.641 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:55.641 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.641 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTI1NGUzYmUzMTFkNTQxMGFjZDZlMTcxMjA4MWZlYjlr7V90: 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTI1NGUzYmUzMTFkNTQxMGFjZDZlMTcxMjA4MWZlYjlr7V90: 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: ]] 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
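The cycle traced above repeats once per key slot: confirm the previous controller attached as nvme0, detach it, load the next DHHC-1 key pair into the target, restrict the host to the digest/dhgroup under test, and re-attach. A minimal host-side sketch of that sequence, using only the rpc_cmd invocations visible in this log (rpc_cmd is the test framework's thin wrapper around SPDK's scripts/rpc.py; the host/auth.sh loop variables and helpers are omitted here):

    # restrict the host to the digest/dhgroup pair being exercised
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

    # attach to the target subsystem, authenticating with this slot's key/ctrlr-key
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # verify the controller came up, then detach before moving to the next keyid
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'
    rpc_cmd bdev_nvme_detach_controller nvme0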
00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:55.900 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:55.901 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:55.901 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.901 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.161 nvme0n1 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
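On the target side, each pass first calls nvmet_auth_set_key <digest> <dhgroup> <keyid>. Its body is not expanded in this trace; as a rough, hedged sketch, the echoed 'hmac(sha256)', dhgroup, key, and ckey values are presumably written into the kernel nvmet configfs entry for the host NQN, roughly as below. The configfs paths and attribute names here are assumptions based on the in-kernel nvmet auth interface, not taken from this log:

    # assumed nvmet configfs layout for DH-HMAC-CHAP provisioning (not shown in the trace)
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host_dir/dhchap_hash"      # negotiated digest
    echo ffdhe4096      > "$host_dir/dhchap_dhgroup"   # negotiated DH group
    echo "$key"         > "$host_dir/dhchap_key"       # host secret (DHHC-1:..:...)
    echo "$ckey"        > "$host_dir/dhchap_ctrl_key"  # controller secret, if bidirectional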
00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2VlYTY0OThhMmNhMDJhZjQxOGMyYTQ4NWM4NGU2M2JhNzcyMjk0MzdhMDg1Zjg22g6MLQ==: 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2VlYTY0OThhMmNhMDJhZjQxOGMyYTQ4NWM4NGU2M2JhNzcyMjk0MzdhMDg1Zjg22g6MLQ==: 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: ]] 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:56.161 20:53:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.161 20:53:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.419 nvme0n1 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTVmYmRiYTE3ZjVmZmE2YWQzYTA2ZWJkNzFkMGY5ZWE4MThkNjI4ZGE5YWI5YjFkY2E3YmNlZDQ1ZmFjZTZmZGROcto=: 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTVmYmRiYTE3ZjVmZmE2YWQzYTA2ZWJkNzFkMGY5ZWE4MThkNjI4ZGE5YWI5YjFkY2E3YmNlZDQ1ZmFjZTZmZGROcto=: 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.419 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.677 nvme0n1 00:23:56.677 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.677 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:56.677 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.677 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.677 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:56.677 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.935 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.935 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:56.935 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.935 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.935 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.935 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:56.935 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:56.935 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:56.935 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:23:56.935 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:56.935 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:56.935 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:56.935 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRhMzdhMGRiNzA0YWM3NThjZDYwY2I5MmU1NTU0NzHxtfuB: 00:23:56.935 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: 00:23:56.935 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:56.935 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:56.935 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRhMzdhMGRiNzA0YWM3NThjZDYwY2I5MmU1NTU0NzHxtfuB: 00:23:56.935 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: ]] 00:23:56.935 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: 00:23:56.935 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:56.935 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:56.935 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:56.935 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:56.935 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:56.935 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:56.935 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:56.935 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.935 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.936 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.936 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:56.936 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:56.936 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:56.936 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:56.936 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.936 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.936 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:56.936 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:56.936 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:56.936 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 
]] 00:23:56.936 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:56.936 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:56.936 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.936 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.501 nvme0n1 00:23:57.501 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.501 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.501 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:57.501 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.501 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.501 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.501 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.501 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.501 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.501 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.501 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.501 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:57.501 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:57.501 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:57.501 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:57.501 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:57.501 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:57.501 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkzNDVlY2M2MjgzMjY3YzU0NzNlNjcyZjU5MTA5NWM4OTJkNzA5ZDYwOWNjOWYxSTOurg==: 00:23:57.501 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: 00:23:57.501 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:57.501 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:57.501 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkzNDVlY2M2MjgzMjY3YzU0NzNlNjcyZjU5MTA5NWM4OTJkNzA5ZDYwOWNjOWYxSTOurg==: 00:23:57.501 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: ]] 00:23:57.501 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: 00:23:57.501 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 
1 00:23:57.501 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:57.501 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:57.501 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:57.501 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:57.501 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:57.501 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:57.501 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.501 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.501 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.501 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:57.501 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:57.502 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:57.502 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:57.502 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.502 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.502 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:57.502 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.502 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:57.502 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:57.502 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:57.502 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:57.502 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.502 20:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.068 nvme0n1 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.068 20:54:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTI1NGUzYmUzMTFkNTQxMGFjZDZlMTcxMjA4MWZlYjlr7V90: 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTI1NGUzYmUzMTFkNTQxMGFjZDZlMTcxMjA4MWZlYjlr7V90: 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: ]] 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.068 20:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.326 nvme0n1 00:23:58.326 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.326 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.326 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:58.326 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.326 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2VlYTY0OThhMmNhMDJhZjQxOGMyYTQ4NWM4NGU2M2JhNzcyMjk0MzdhMDg1Zjg22g6MLQ==: 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2VlYTY0OThhMmNhMDJhZjQxOGMyYTQ4NWM4NGU2M2JhNzcyMjk0MzdhMDg1Zjg22g6MLQ==: 00:23:58.585 
20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: ]] 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.585 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.152 nvme0n1 00:23:59.152 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.152 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.152 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.152 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.152 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:59.152 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.152 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.152 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.152 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.152 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.152 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.152 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.152 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:59.152 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.152 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:59.152 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:59.152 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:59.152 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTVmYmRiYTE3ZjVmZmE2YWQzYTA2ZWJkNzFkMGY5ZWE4MThkNjI4ZGE5YWI5YjFkY2E3YmNlZDQ1ZmFjZTZmZGROcto=: 00:23:59.152 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:59.152 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:59.152 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:59.152 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTVmYmRiYTE3ZjVmZmE2YWQzYTA2ZWJkNzFkMGY5ZWE4MThkNjI4ZGE5YWI5YjFkY2E3YmNlZDQ1ZmFjZTZmZGROcto=: 00:23:59.152 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:59.152 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:59.152 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.152 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:59.152 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:59.152 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:59.152 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.152 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:59.152 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.152 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.152 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.152 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.152 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:59.152 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:59.153 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:23:59.153 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.153 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.153 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:59.153 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.153 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:59.153 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:59.153 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:59.153 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:59.153 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.153 20:54:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.720 nvme0n1 00:23:59.720 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.720 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.720 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.720 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.720 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.720 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.720 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.720 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.720 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.720 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.720 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.720 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:59.720 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.720 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:59.720 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.720 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:59.720 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:59.720 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:59.720 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRhMzdhMGRiNzA0YWM3NThjZDYwY2I5MmU1NTU0NzHxtfuB: 00:23:59.721 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: 00:23:59.721 20:54:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:59.721 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:59.721 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRhMzdhMGRiNzA0YWM3NThjZDYwY2I5MmU1NTU0NzHxtfuB: 00:23:59.721 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: ]] 00:23:59.721 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: 00:23:59.721 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:59.721 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.721 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:59.721 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:59.721 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:59.721 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.721 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:59.721 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.721 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.721 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.721 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.721 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:59.721 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:59.721 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:59.721 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.721 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.721 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:59.721 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.721 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:59.721 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:59.721 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:59.721 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:59.721 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.721 20:54:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.655 nvme0n1 00:24:00.655 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.655 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.655 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.655 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.655 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.655 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.655 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.655 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.655 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkzNDVlY2M2MjgzMjY3YzU0NzNlNjcyZjU5MTA5NWM4OTJkNzA5ZDYwOWNjOWYxSTOurg==: 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkzNDVlY2M2MjgzMjY3YzU0NzNlNjcyZjU5MTA5NWM4OTJkNzA5ZDYwOWNjOWYxSTOurg==: 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: ]] 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.656 20:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.655 nvme0n1 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.655 20:54:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTI1NGUzYmUzMTFkNTQxMGFjZDZlMTcxMjA4MWZlYjlr7V90: 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTI1NGUzYmUzMTFkNTQxMGFjZDZlMTcxMjA4MWZlYjlr7V90: 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: ]] 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.655 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.588 nvme0n1 00:24:02.588 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.588 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.588 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.588 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.588 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.588 20:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2VlYTY0OThhMmNhMDJhZjQxOGMyYTQ4NWM4NGU2M2JhNzcyMjk0MzdhMDg1Zjg22g6MLQ==: 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2VlYTY0OThhMmNhMDJhZjQxOGMyYTQ4NWM4NGU2M2JhNzcyMjk0MzdhMDg1Zjg22g6MLQ==: 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: ]] 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:02.588 20:54:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.588 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.522 nvme0n1 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTVmYmRiYTE3ZjVmZmE2YWQzYTA2ZWJkNzFkMGY5ZWE4MThkNjI4ZGE5YWI5YjFkY2E3YmNlZDQ1ZmFjZTZmZGROcto=: 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTVmYmRiYTE3ZjVmZmE2YWQzYTA2ZWJkNzFkMGY5ZWE4MThkNjI4ZGE5YWI5YjFkY2E3YmNlZDQ1ZmFjZTZmZGROcto=: 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:03.522 20:54:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.522 20:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.456 nvme0n1 00:24:04.456 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.456 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.456 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.456 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.456 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.456 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.456 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.456 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.456 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.456 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.456 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRhMzdhMGRiNzA0YWM3NThjZDYwY2I5MmU1NTU0NzHxtfuB: 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRhMzdhMGRiNzA0YWM3NThjZDYwY2I5MmU1NTU0NzHxtfuB: 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: ]] 00:24:04.457 
20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.457 20:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.457 nvme0n1 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkzNDVlY2M2MjgzMjY3YzU0NzNlNjcyZjU5MTA5NWM4OTJkNzA5ZDYwOWNjOWYxSTOurg==: 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkzNDVlY2M2MjgzMjY3YzU0NzNlNjcyZjU5MTA5NWM4OTJkNzA5ZDYwOWNjOWYxSTOurg==: 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: ]] 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.457 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.715 nvme0n1 00:24:04.715 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.715 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.715 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.715 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.715 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.715 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.715 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.715 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.715 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.715 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.715 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.715 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.715 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:04.715 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.715 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:04.715 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:04.715 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:04.715 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTI1NGUzYmUzMTFkNTQxMGFjZDZlMTcxMjA4MWZlYjlr7V90: 00:24:04.715 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: 00:24:04.715 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:04.715 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:04.715 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTI1NGUzYmUzMTFkNTQxMGFjZDZlMTcxMjA4MWZlYjlr7V90: 00:24:04.715 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: ]] 00:24:04.715 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: 00:24:04.716 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:04.716 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.716 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:04.716 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:04.716 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:04.716 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.716 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:04.716 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.716 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.716 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.716 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.716 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:04.716 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:04.716 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:04.716 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.716 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.716 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:04.716 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.716 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:04.716 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:04.716 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:04.716 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:04.716 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.716 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.974 nvme0n1 00:24:04.974 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.974 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.974 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2VlYTY0OThhMmNhMDJhZjQxOGMyYTQ4NWM4NGU2M2JhNzcyMjk0MzdhMDg1Zjg22g6MLQ==: 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2VlYTY0OThhMmNhMDJhZjQxOGMyYTQ4NWM4NGU2M2JhNzcyMjk0MzdhMDg1Zjg22g6MLQ==: 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: ]] 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:04.975 20:54:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.975 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.234 nvme0n1 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 
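The trace repeats the same sequence for every digest/DH-group/key-index combination: nvmet_auth_set_key programs the kernel target with the key material, bdev_nvme_set_options restricts the SPDK host to a single digest and DH group, and bdev_nvme_attach_controller performs the authenticated connect that is then verified and torn down. A minimal sketch of one such iteration follows; the configfs attribute names under /sys/kernel/config/nvmet/hosts, the ./scripts/rpc.py invocation, and the key names key1/ckey1 (assumed to be registered with the SPDK keyring earlier in the test) are assumptions inferred from this trace, and the DHHC values are placeholders rather than the keys used in this run.

# Target side: program DH-HMAC-CHAP material for the host NQN (assumed nvmet configfs layout).
host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host_dir/dhchap_hash"          # digest under test
echo ffdhe8192 > "$host_dir/dhchap_dhgroup"            # DH group under test
echo 'DHHC-1:01:<host-key>:' > "$host_dir/dhchap_key"        # per-keyid host key (placeholder)
echo 'DHHC-1:01:<ctrl-key>:' > "$host_dir/dhchap_ctrl_key"   # bidirectional controller key (placeholder)

# Host side: limit negotiation to the same digest/DH group, connect with the matching key pair,
# check that the controller shows up, then detach before the next combination.
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
  --dhchap-key key1 --dhchap-ctrlr-key ckey1
./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
./scripts/rpc.py bdev_nvme_detach_controller nvme0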
00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTVmYmRiYTE3ZjVmZmE2YWQzYTA2ZWJkNzFkMGY5ZWE4MThkNjI4ZGE5YWI5YjFkY2E3YmNlZDQ1ZmFjZTZmZGROcto=: 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTVmYmRiYTE3ZjVmZmE2YWQzYTA2ZWJkNzFkMGY5ZWE4MThkNjI4ZGE5YWI5YjFkY2E3YmNlZDQ1ZmFjZTZmZGROcto=: 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.234 nvme0n1 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.234 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRhMzdhMGRiNzA0YWM3NThjZDYwY2I5MmU1NTU0NzHxtfuB: 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRhMzdhMGRiNzA0YWM3NThjZDYwY2I5MmU1NTU0NzHxtfuB: 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: ]] 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:05.493 20:54:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.493 20:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.493 nvme0n1 00:24:05.493 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.493 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.493 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.493 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.493 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.493 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.751 20:54:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkzNDVlY2M2MjgzMjY3YzU0NzNlNjcyZjU5MTA5NWM4OTJkNzA5ZDYwOWNjOWYxSTOurg==: 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkzNDVlY2M2MjgzMjY3YzU0NzNlNjcyZjU5MTA5NWM4OTJkNzA5ZDYwOWNjOWYxSTOurg==: 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: ]] 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:05.751 20:54:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.751 nvme0n1 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.751 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.009 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.009 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.009 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.009 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.009 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.009 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:06.009 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:06.009 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.009 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:06.009 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:06.009 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:06.009 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTI1NGUzYmUzMTFkNTQxMGFjZDZlMTcxMjA4MWZlYjlr7V90: 00:24:06.009 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: 00:24:06.009 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:06.009 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:06.010 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTI1NGUzYmUzMTFkNTQxMGFjZDZlMTcxMjA4MWZlYjlr7V90: 00:24:06.010 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: ]] 00:24:06.010 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: 00:24:06.010 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:06.010 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:06.010 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:06.010 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:06.010 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:06.010 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:06.010 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:06.010 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.010 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.010 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.010 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:06.010 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:06.010 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:06.010 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:06.010 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.010 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.010 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:06.010 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.010 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:06.010 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:06.010 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:06.010 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:06.010 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.010 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.010 nvme0n1 00:24:06.010 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.010 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.010 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.010 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.010 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:06.010 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2VlYTY0OThhMmNhMDJhZjQxOGMyYTQ4NWM4NGU2M2JhNzcyMjk0MzdhMDg1Zjg22g6MLQ==: 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2VlYTY0OThhMmNhMDJhZjQxOGMyYTQ4NWM4NGU2M2JhNzcyMjk0MzdhMDg1Zjg22g6MLQ==: 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: ]] 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.268 nvme0n1 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:06.268 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.528 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.528 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.528 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.528 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.528 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.528 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:06.528 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:06.528 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.528 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:06.528 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:06.528 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:06.528 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTVmYmRiYTE3ZjVmZmE2YWQzYTA2ZWJkNzFkMGY5ZWE4MThkNjI4ZGE5YWI5YjFkY2E3YmNlZDQ1ZmFjZTZmZGROcto=: 00:24:06.528 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:06.528 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:06.528 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:06.528 
20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTVmYmRiYTE3ZjVmZmE2YWQzYTA2ZWJkNzFkMGY5ZWE4MThkNjI4ZGE5YWI5YjFkY2E3YmNlZDQ1ZmFjZTZmZGROcto=: 00:24:06.528 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:06.528 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:06.528 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:06.528 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:06.528 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:06.528 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:06.528 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:06.529 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:06.529 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.529 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.529 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.529 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:06.529 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:06.529 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:06.529 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:06.529 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.529 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.529 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:06.529 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.529 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:06.529 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:06.529 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:06.529 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:06.529 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.529 20:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.529 nvme0n1 00:24:06.529 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.529 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.529 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.529 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:06.529 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.529 
20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.529 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.529 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.529 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.529 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRhMzdhMGRiNzA0YWM3NThjZDYwY2I5MmU1NTU0NzHxtfuB: 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRhMzdhMGRiNzA0YWM3NThjZDYwY2I5MmU1NTU0NzHxtfuB: 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: ]] 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.788 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.046 nvme0n1 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDkzNDVlY2M2MjgzMjY3YzU0NzNlNjcyZjU5MTA5NWM4OTJkNzA5ZDYwOWNjOWYxSTOurg==: 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkzNDVlY2M2MjgzMjY3YzU0NzNlNjcyZjU5MTA5NWM4OTJkNzA5ZDYwOWNjOWYxSTOurg==: 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: ]] 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:07.046 20:54:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.046 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.304 nvme0n1 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTI1NGUzYmUzMTFkNTQxMGFjZDZlMTcxMjA4MWZlYjlr7V90: 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTI1NGUzYmUzMTFkNTQxMGFjZDZlMTcxMjA4MWZlYjlr7V90: 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: ]] 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.305 20:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.564 nvme0n1 00:24:07.564 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.564 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.564 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.564 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.564 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:07.564 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.564 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.564 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.564 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.564 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.822 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.822 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:07.822 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:24:07.822 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:07.822 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:07.822 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:07.822 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:07.822 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2VlYTY0OThhMmNhMDJhZjQxOGMyYTQ4NWM4NGU2M2JhNzcyMjk0MzdhMDg1Zjg22g6MLQ==: 00:24:07.822 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: 00:24:07.822 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:07.822 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:07.822 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2VlYTY0OThhMmNhMDJhZjQxOGMyYTQ4NWM4NGU2M2JhNzcyMjk0MzdhMDg1Zjg22g6MLQ==: 00:24:07.822 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: ]] 00:24:07.822 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: 00:24:07.822 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:07.822 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:07.822 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:07.822 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:07.822 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:07.822 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:07.822 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:07.822 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.822 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.822 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.822 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:07.822 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:07.823 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:07.823 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:07.823 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.823 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.823 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:07.823 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.823 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:07.823 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:07.823 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:07.823 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:07.823 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.823 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.081 nvme0n1 00:24:08.081 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.081 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.081 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:08.081 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.081 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.081 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.081 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.081 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.081 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.081 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.081 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.081 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:08.081 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:08.081 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:08.081 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:08.081 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:08.081 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:08.081 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTVmYmRiYTE3ZjVmZmE2YWQzYTA2ZWJkNzFkMGY5ZWE4MThkNjI4ZGE5YWI5YjFkY2E3YmNlZDQ1ZmFjZTZmZGROcto=: 00:24:08.081 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:08.081 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:08.081 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:08.081 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTVmYmRiYTE3ZjVmZmE2YWQzYTA2ZWJkNzFkMGY5ZWE4MThkNjI4ZGE5YWI5YjFkY2E3YmNlZDQ1ZmFjZTZmZGROcto=: 00:24:08.081 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:08.081 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:08.081 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:08.081 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:08.081 20:54:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:08.081 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:08.082 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:08.082 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:08.082 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.082 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.082 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.082 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:08.082 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:08.082 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:08.082 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:08.082 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.082 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.082 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:08.082 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.082 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:08.082 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:08.082 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:08.082 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:08.082 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.082 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.341 nvme0n1 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRhMzdhMGRiNzA0YWM3NThjZDYwY2I5MmU1NTU0NzHxtfuB: 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRhMzdhMGRiNzA0YWM3NThjZDYwY2I5MmU1NTU0NzHxtfuB: 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: ]] 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:08.341 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:08.342 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:08.342 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:08.342 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.342 20:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.910 nvme0n1 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkzNDVlY2M2MjgzMjY3YzU0NzNlNjcyZjU5MTA5NWM4OTJkNzA5ZDYwOWNjOWYxSTOurg==: 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDkzNDVlY2M2MjgzMjY3YzU0NzNlNjcyZjU5MTA5NWM4OTJkNzA5ZDYwOWNjOWYxSTOurg==: 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: ]] 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.910 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.476 nvme0n1 00:24:09.476 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.476 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.476 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.476 20:54:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.476 20:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTI1NGUzYmUzMTFkNTQxMGFjZDZlMTcxMjA4MWZlYjlr7V90: 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTI1NGUzYmUzMTFkNTQxMGFjZDZlMTcxMjA4MWZlYjlr7V90: 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: ]] 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.476 20:54:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.476 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.043 nvme0n1 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:N2VlYTY0OThhMmNhMDJhZjQxOGMyYTQ4NWM4NGU2M2JhNzcyMjk0MzdhMDg1Zjg22g6MLQ==: 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2VlYTY0OThhMmNhMDJhZjQxOGMyYTQ4NWM4NGU2M2JhNzcyMjk0MzdhMDg1Zjg22g6MLQ==: 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: ]] 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:10.043 20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.043 
20:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.610 nvme0n1 00:24:10.610 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.610 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.610 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.610 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.610 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.610 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.610 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.610 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.610 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.610 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.610 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.610 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:10.610 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:10.611 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.611 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:10.611 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:10.611 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:10.611 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTVmYmRiYTE3ZjVmZmE2YWQzYTA2ZWJkNzFkMGY5ZWE4MThkNjI4ZGE5YWI5YjFkY2E3YmNlZDQ1ZmFjZTZmZGROcto=: 00:24:10.611 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:10.611 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:10.611 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:10.611 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTVmYmRiYTE3ZjVmZmE2YWQzYTA2ZWJkNzFkMGY5ZWE4MThkNjI4ZGE5YWI5YjFkY2E3YmNlZDQ1ZmFjZTZmZGROcto=: 00:24:10.611 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:10.611 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:24:10.611 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.611 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:10.611 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:10.611 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:10.611 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.611 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:10.611 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.611 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.611 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.611 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.611 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:10.611 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:10.611 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:10.611 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.611 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.611 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:10.611 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.611 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:10.611 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:10.611 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:10.611 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:10.611 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.611 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.178 nvme0n1 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.178 20:54:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRhMzdhMGRiNzA0YWM3NThjZDYwY2I5MmU1NTU0NzHxtfuB: 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRhMzdhMGRiNzA0YWM3NThjZDYwY2I5MmU1NTU0NzHxtfuB: 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: ]] 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.178 20:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.112 nvme0n1 00:24:12.112 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.112 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.112 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.112 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.112 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.112 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.112 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.112 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.112 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.112 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.112 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.112 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.112 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:24:12.112 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.112 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:12.112 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:12.112 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:12.112 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkzNDVlY2M2MjgzMjY3YzU0NzNlNjcyZjU5MTA5NWM4OTJkNzA5ZDYwOWNjOWYxSTOurg==: 00:24:12.112 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: 00:24:12.112 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:12.112 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:12.112 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkzNDVlY2M2MjgzMjY3YzU0NzNlNjcyZjU5MTA5NWM4OTJkNzA5ZDYwOWNjOWYxSTOurg==: 00:24:12.113 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: ]] 00:24:12.113 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: 00:24:12.113 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:24:12.113 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.113 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:12.113 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:12.113 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:12.113 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.113 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:12.113 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.113 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.113 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.113 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.113 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:12.113 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:12.113 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:12.113 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.113 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.113 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:12.113 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.113 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:12.113 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:12.113 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:12.113 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:12.113 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.113 20:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.046 nvme0n1 00:24:13.046 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.046 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.046 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.046 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.046 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.046 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.046 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.046 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.046 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:13.046 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.046 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.046 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.046 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:13.046 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.046 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:13.046 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:13.046 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:13.046 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTI1NGUzYmUzMTFkNTQxMGFjZDZlMTcxMjA4MWZlYjlr7V90: 00:24:13.046 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: 00:24:13.046 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:13.046 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:13.047 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTI1NGUzYmUzMTFkNTQxMGFjZDZlMTcxMjA4MWZlYjlr7V90: 00:24:13.047 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: ]] 00:24:13.047 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: 00:24:13.047 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:24:13.047 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.047 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:13.047 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:13.047 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:13.047 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.047 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:13.047 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.047 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.047 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.047 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.047 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:13.047 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:13.047 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:13.047 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.047 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.047 
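[Editorial note] The host/auth.sh@100, @101 and @102 markers that keep reappearing in this trace are the loop nest driving these passes. A rough sketch of that structure is below; it is not self-contained (nvmet_auth_set_key and connect_authenticate are the script's own helpers, seen expanded in the trace), and the array contents are illustrative, listing only the values that actually appear in this part of the log.

```bash
# Orientation sketch of the loop nest behind the repeating markers host/auth.sh@100..@104.
# ASSUMPTION: array contents are partial; only values visible in this excerpt are listed.
digests=(sha384 sha512)
dhgroups=(ffdhe2048 ffdhe6144 ffdhe8192)
# keys[0..4]  : DHHC-1 host secrets from the trace
# ckeys[0..4] : matching controller secrets ("" where the trace shows none, e.g. index 4)

for digest in "${digests[@]}"; do             # host/auth.sh@100
	for dhgroup in "${dhgroups[@]}"; do       # host/auth.sh@101
		for keyid in "${!keys[@]}"; do        # host/auth.sh@102
			# Push this key pair to the kernel nvmet target side ...
			nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # host/auth.sh@103
			# ... then authenticate from the SPDK initiator with matching settings.
			connect_authenticate "$digest" "$dhgroup" "$keyid"  # host/auth.sh@104
		done
	done
done
```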
20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:13.047 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.047 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:13.047 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:13.047 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:13.047 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:13.047 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.047 20:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.980 nvme0n1 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2VlYTY0OThhMmNhMDJhZjQxOGMyYTQ4NWM4NGU2M2JhNzcyMjk0MzdhMDg1Zjg22g6MLQ==: 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2VlYTY0OThhMmNhMDJhZjQxOGMyYTQ4NWM4NGU2M2JhNzcyMjk0MzdhMDg1Zjg22g6MLQ==: 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: ]] 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.980 20:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.911 nvme0n1 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.911 20:54:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTVmYmRiYTE3ZjVmZmE2YWQzYTA2ZWJkNzFkMGY5ZWE4MThkNjI4ZGE5YWI5YjFkY2E3YmNlZDQ1ZmFjZTZmZGROcto=: 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTVmYmRiYTE3ZjVmZmE2YWQzYTA2ZWJkNzFkMGY5ZWE4MThkNjI4ZGE5YWI5YjFkY2E3YmNlZDQ1ZmFjZTZmZGROcto=: 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:14.911 20:54:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.911 20:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.841 nvme0n1 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRhMzdhMGRiNzA0YWM3NThjZDYwY2I5MmU1NTU0NzHxtfuB: 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRhMzdhMGRiNzA0YWM3NThjZDYwY2I5MmU1NTU0NzHxtfuB: 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: ]] 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:15.841 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:15.842 nvme0n1 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkzNDVlY2M2MjgzMjY3YzU0NzNlNjcyZjU5MTA5NWM4OTJkNzA5ZDYwOWNjOWYxSTOurg==: 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkzNDVlY2M2MjgzMjY3YzU0NzNlNjcyZjU5MTA5NWM4OTJkNzA5ZDYwOWNjOWYxSTOurg==: 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: ]] 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.842 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.099 nvme0n1 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:16.099 
20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTI1NGUzYmUzMTFkNTQxMGFjZDZlMTcxMjA4MWZlYjlr7V90: 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTI1NGUzYmUzMTFkNTQxMGFjZDZlMTcxMjA4MWZlYjlr7V90: 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: ]] 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.099 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.356 nvme0n1 00:24:16.356 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.356 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.356 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.356 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.356 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.356 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.356 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.356 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.356 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.356 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.356 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.356 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.356 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:16.356 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.356 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:16.356 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:16.356 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:16.356 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2VlYTY0OThhMmNhMDJhZjQxOGMyYTQ4NWM4NGU2M2JhNzcyMjk0MzdhMDg1Zjg22g6MLQ==: 00:24:16.356 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: 00:24:16.356 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:16.357 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:16.357 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2VlYTY0OThhMmNhMDJhZjQxOGMyYTQ4NWM4NGU2M2JhNzcyMjk0MzdhMDg1Zjg22g6MLQ==: 00:24:16.357 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: ]] 00:24:16.357 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: 00:24:16.357 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:24:16.357 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.357 
20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:16.357 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:16.357 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:16.357 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.357 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:16.357 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.357 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.357 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.357 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.357 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:16.357 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:16.357 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:16.357 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.357 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.357 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:16.357 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.357 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:16.357 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:16.357 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:16.357 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:16.357 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.357 20:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.613 nvme0n1 00:24:16.613 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.613 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.613 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.613 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.613 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.613 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.613 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.613 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.613 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.613 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:16.613 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.613 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.613 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:16.613 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.613 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:16.613 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:16.613 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:16.614 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTVmYmRiYTE3ZjVmZmE2YWQzYTA2ZWJkNzFkMGY5ZWE4MThkNjI4ZGE5YWI5YjFkY2E3YmNlZDQ1ZmFjZTZmZGROcto=: 00:24:16.614 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:16.614 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:16.614 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:16.614 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTVmYmRiYTE3ZjVmZmE2YWQzYTA2ZWJkNzFkMGY5ZWE4MThkNjI4ZGE5YWI5YjFkY2E3YmNlZDQ1ZmFjZTZmZGROcto=: 00:24:16.614 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:16.614 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:24:16.614 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.614 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:16.614 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:16.614 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:16.614 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.614 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:16.614 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.614 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.614 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.614 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.614 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:16.614 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:16.614 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:16.614 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.614 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.614 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:16.614 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.614 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:16.614 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:16.614 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:16.614 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:16.614 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.614 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.871 nvme0n1 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRhMzdhMGRiNzA0YWM3NThjZDYwY2I5MmU1NTU0NzHxtfuB: 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRhMzdhMGRiNzA0YWM3NThjZDYwY2I5MmU1NTU0NzHxtfuB: 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: ]] 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.871 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.129 nvme0n1 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.129 
20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkzNDVlY2M2MjgzMjY3YzU0NzNlNjcyZjU5MTA5NWM4OTJkNzA5ZDYwOWNjOWYxSTOurg==: 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkzNDVlY2M2MjgzMjY3YzU0NzNlNjcyZjU5MTA5NWM4OTJkNzA5ZDYwOWNjOWYxSTOurg==: 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: ]] 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:17.129 20:54:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.129 nvme0n1 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.129 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTI1NGUzYmUzMTFkNTQxMGFjZDZlMTcxMjA4MWZlYjlr7V90: 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: 00:24:17.388 20:54:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTI1NGUzYmUzMTFkNTQxMGFjZDZlMTcxMjA4MWZlYjlr7V90: 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: ]] 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:17.388 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:17.389 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.389 20:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.389 nvme0n1 00:24:17.389 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.389 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.389 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.389 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.389 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.389 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.647 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.647 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.647 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.647 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.647 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.647 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.647 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:17.647 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.647 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:17.647 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:17.647 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:17.647 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2VlYTY0OThhMmNhMDJhZjQxOGMyYTQ4NWM4NGU2M2JhNzcyMjk0MzdhMDg1Zjg22g6MLQ==: 00:24:17.647 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: 00:24:17.647 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:17.647 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:17.647 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2VlYTY0OThhMmNhMDJhZjQxOGMyYTQ4NWM4NGU2M2JhNzcyMjk0MzdhMDg1Zjg22g6MLQ==: 00:24:17.648 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: ]] 00:24:17.648 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: 00:24:17.648 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:24:17.648 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.648 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:17.648 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:17.648 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:17.648 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.648 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:17.648 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.648 20:54:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.648 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.648 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.648 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:17.648 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:17.648 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:17.648 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.648 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.648 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:17.648 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.648 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:17.648 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:17.648 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:17.648 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:17.648 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.648 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.648 nvme0n1 00:24:17.648 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.648 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.648 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.648 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.648 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.906 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.906 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.906 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.906 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.906 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:17.907 
20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTVmYmRiYTE3ZjVmZmE2YWQzYTA2ZWJkNzFkMGY5ZWE4MThkNjI4ZGE5YWI5YjFkY2E3YmNlZDQ1ZmFjZTZmZGROcto=: 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTVmYmRiYTE3ZjVmZmE2YWQzYTA2ZWJkNzFkMGY5ZWE4MThkNjI4ZGE5YWI5YjFkY2E3YmNlZDQ1ZmFjZTZmZGROcto=: 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
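The cycle traced above repeats for every digest/dhgroup/keyid combination: host/auth.sh first publishes the DH-HMAC-CHAP hash, dhgroup and secret for the allowed host (the echo 'hmac(sha512)' / echo ffdhe3072 / echo DHHC-1:... entries), then connect_authenticate restricts the SPDK initiator to that digest and dhgroup, attaches a controller over 10.0.0.1:4420 with the matching --dhchap-key/--dhchap-ctrlr-key names, checks that nvme0 shows up in bdev_nvme_get_controllers, and detaches it before the next iteration. A condensed sketch of one such pass, assembled only from commands visible in this trace (an illustration of the flow, not the actual host/auth.sh source):

    # One pass of the loop traced above. rpc_cmd is the autotest wrapper around
    # scripts/rpc.py; key0/ckey0 are key names registered earlier in the run
    # (not shown in this excerpt).
    digest=sha512
    dhgroup=ffdhe3072

    # Initiator side: only allow this digest/dhgroup, then connect with the
    # matching host and controller keys.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Verify the controller authenticated and came up, then tear it down so the
    # next digest/dhgroup/key combination can be tested.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
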
00:24:17.907 nvme0n1 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.907 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.165 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.165 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.165 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.165 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.165 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.165 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:18.165 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.165 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:24:18.165 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.165 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:18.165 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:18.165 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:18.165 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRhMzdhMGRiNzA0YWM3NThjZDYwY2I5MmU1NTU0NzHxtfuB: 00:24:18.165 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: 00:24:18.165 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:18.165 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:18.165 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRhMzdhMGRiNzA0YWM3NThjZDYwY2I5MmU1NTU0NzHxtfuB: 00:24:18.165 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: ]] 00:24:18.166 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: 00:24:18.166 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:24:18.166 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.166 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:18.166 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:18.166 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:18.166 20:54:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.166 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:18.166 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.166 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.166 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.166 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.166 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:18.166 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:18.166 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:18.166 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.166 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.166 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:18.166 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.166 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:18.166 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:18.166 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:18.166 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:18.166 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.166 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.424 nvme0n1 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.424 20:54:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkzNDVlY2M2MjgzMjY3YzU0NzNlNjcyZjU5MTA5NWM4OTJkNzA5ZDYwOWNjOWYxSTOurg==: 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkzNDVlY2M2MjgzMjY3YzU0NzNlNjcyZjU5MTA5NWM4OTJkNzA5ZDYwOWNjOWYxSTOurg==: 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: ]] 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.424 20:54:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.424 20:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.682 nvme0n1 00:24:18.682 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.682 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.682 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.682 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.682 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.682 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.682 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.682 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.682 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.682 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.682 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.682 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.682 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:18.682 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.682 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:18.682 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:18.682 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:18.682 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTI1NGUzYmUzMTFkNTQxMGFjZDZlMTcxMjA4MWZlYjlr7V90: 00:24:18.682 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: 00:24:18.683 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:18.683 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:18.683 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTI1NGUzYmUzMTFkNTQxMGFjZDZlMTcxMjA4MWZlYjlr7V90: 00:24:18.683 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: ]] 00:24:18.683 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: 00:24:18.683 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:24:18.683 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.683 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:18.683 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:18.683 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:18.683 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.683 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:18.683 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.683 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.683 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.683 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.683 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:18.683 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:18.683 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:18.683 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.683 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.683 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:18.683 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.683 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:18.683 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:18.683 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:18.683 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:18.683 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.683 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.940 nvme0n1 00:24:18.940 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.940 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.940 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.940 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.940 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.940 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2VlYTY0OThhMmNhMDJhZjQxOGMyYTQ4NWM4NGU2M2JhNzcyMjk0MzdhMDg1Zjg22g6MLQ==: 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2VlYTY0OThhMmNhMDJhZjQxOGMyYTQ4NWM4NGU2M2JhNzcyMjk0MzdhMDg1Zjg22g6MLQ==: 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: ]] 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.198 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:19.199 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:19.199 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:19.199 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:19.199 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.199 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.456 nvme0n1 00:24:19.456 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.456 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.456 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.456 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.456 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.456 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.456 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.456 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.456 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.456 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.456 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.456 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.456 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:19.456 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.456 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:19.456 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:19.456 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:19.456 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTVmYmRiYTE3ZjVmZmE2YWQzYTA2ZWJkNzFkMGY5ZWE4MThkNjI4ZGE5YWI5YjFkY2E3YmNlZDQ1ZmFjZTZmZGROcto=: 00:24:19.456 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:19.457 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:19.457 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:19.457 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YTVmYmRiYTE3ZjVmZmE2YWQzYTA2ZWJkNzFkMGY5ZWE4MThkNjI4ZGE5YWI5YjFkY2E3YmNlZDQ1ZmFjZTZmZGROcto=: 00:24:19.457 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:19.457 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:24:19.457 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.457 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:19.457 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:19.457 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:19.457 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.457 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:19.457 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.457 20:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.457 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.457 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.457 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:19.457 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:19.457 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:19.457 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.457 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.457 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:19.457 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.457 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:19.457 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:19.457 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:19.457 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:19.457 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.457 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.715 nvme0n1 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRhMzdhMGRiNzA0YWM3NThjZDYwY2I5MmU1NTU0NzHxtfuB: 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRhMzdhMGRiNzA0YWM3NThjZDYwY2I5MmU1NTU0NzHxtfuB: 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: ]] 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.715 20:54:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.715 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.716 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:19.716 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.716 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:19.716 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:19.716 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:19.716 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:19.716 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.716 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.282 nvme0n1 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NDkzNDVlY2M2MjgzMjY3YzU0NzNlNjcyZjU5MTA5NWM4OTJkNzA5ZDYwOWNjOWYxSTOurg==: 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkzNDVlY2M2MjgzMjY3YzU0NzNlNjcyZjU5MTA5NWM4OTJkNzA5ZDYwOWNjOWYxSTOurg==: 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: ]] 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:20.282 20:54:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.282 20:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.887 nvme0n1 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTI1NGUzYmUzMTFkNTQxMGFjZDZlMTcxMjA4MWZlYjlr7V90: 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTI1NGUzYmUzMTFkNTQxMGFjZDZlMTcxMjA4MWZlYjlr7V90: 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: ]] 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.887 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.454 nvme0n1 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2VlYTY0OThhMmNhMDJhZjQxOGMyYTQ4NWM4NGU2M2JhNzcyMjk0MzdhMDg1Zjg22g6MLQ==: 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2VlYTY0OThhMmNhMDJhZjQxOGMyYTQ4NWM4NGU2M2JhNzcyMjk0MzdhMDg1Zjg22g6MLQ==: 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: ]] 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.454 20:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.021 nvme0n1 00:24:22.021 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.021 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.021 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.021 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.021 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.021 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.021 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.021 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.021 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.021 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.021 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.021 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.021 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:22.021 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.021 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:22.021 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:22.021 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:22.021 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTVmYmRiYTE3ZjVmZmE2YWQzYTA2ZWJkNzFkMGY5ZWE4MThkNjI4ZGE5YWI5YjFkY2E3YmNlZDQ1ZmFjZTZmZGROcto=: 00:24:22.022 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:22.022 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:22.022 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:22.022 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTVmYmRiYTE3ZjVmZmE2YWQzYTA2ZWJkNzFkMGY5ZWE4MThkNjI4ZGE5YWI5YjFkY2E3YmNlZDQ1ZmFjZTZmZGROcto=: 00:24:22.022 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:22.022 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:24:22.022 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.022 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:22.022 20:54:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:22.022 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:22.022 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.022 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:22.022 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.022 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.022 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.022 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.022 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:22.022 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:22.022 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:22.022 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.022 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.022 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:22.022 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.022 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:22.022 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:22.022 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:22.022 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:22.022 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.022 20:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.587 nvme0n1 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGRhMzdhMGRiNzA0YWM3NThjZDYwY2I5MmU1NTU0NzHxtfuB: 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGRhMzdhMGRiNzA0YWM3NThjZDYwY2I5MmU1NTU0NzHxtfuB: 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: ]] 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTMyMDg0NWFlZWRjNGJmZTc4YzkyOTNjNGU5MDBjZTVjZmFhZTBiN2E4YmNlN2Q0ZmY0Yjg0YmRhYzkwMWM5ZE1ffHg=: 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.587 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.518 nvme0n1 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkzNDVlY2M2MjgzMjY3YzU0NzNlNjcyZjU5MTA5NWM4OTJkNzA5ZDYwOWNjOWYxSTOurg==: 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDkzNDVlY2M2MjgzMjY3YzU0NzNlNjcyZjU5MTA5NWM4OTJkNzA5ZDYwOWNjOWYxSTOurg==: 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: ]] 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.518 20:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.451 nvme0n1 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.451 20:54:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTI1NGUzYmUzMTFkNTQxMGFjZDZlMTcxMjA4MWZlYjlr7V90: 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTI1NGUzYmUzMTFkNTQxMGFjZDZlMTcxMjA4MWZlYjlr7V90: 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: ]] 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.451 20:54:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.451 20:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.017 nvme0n1 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:N2VlYTY0OThhMmNhMDJhZjQxOGMyYTQ4NWM4NGU2M2JhNzcyMjk0MzdhMDg1Zjg22g6MLQ==: 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2VlYTY0OThhMmNhMDJhZjQxOGMyYTQ4NWM4NGU2M2JhNzcyMjk0MzdhMDg1Zjg22g6MLQ==: 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: ]] 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2M3ZjM1OGM1ZmYwNDJkZDk5NDUxODM1MDc5ZTMwOWGknZ3H: 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:25.276 20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.276 
20:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.210 nvme0n1 00:24:26.210 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.210 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.210 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.210 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.210 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.210 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.210 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.210 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.210 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.210 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.210 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.210 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.210 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:26.210 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.210 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:26.210 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:26.210 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:26.211 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTVmYmRiYTE3ZjVmZmE2YWQzYTA2ZWJkNzFkMGY5ZWE4MThkNjI4ZGE5YWI5YjFkY2E3YmNlZDQ1ZmFjZTZmZGROcto=: 00:24:26.211 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:26.211 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:26.211 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:26.211 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTVmYmRiYTE3ZjVmZmE2YWQzYTA2ZWJkNzFkMGY5ZWE4MThkNjI4ZGE5YWI5YjFkY2E3YmNlZDQ1ZmFjZTZmZGROcto=: 00:24:26.211 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:26.211 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:24:26.211 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.211 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:26.211 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:26.211 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:26.211 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.211 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:26.211 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.211 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.211 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.211 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.211 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:26.211 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:26.211 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:26.211 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.211 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.211 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:26.211 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.211 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:26.211 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:26.211 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:26.211 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:26.211 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.211 20:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.147 nvme0n1 00:24:27.147 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.147 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.147 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.147 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.147 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.147 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.147 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.147 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.147 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.147 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.147 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.147 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:27.147 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.147 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:27.147 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:27.147 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:24:27.147 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkzNDVlY2M2MjgzMjY3YzU0NzNlNjcyZjU5MTA5NWM4OTJkNzA5ZDYwOWNjOWYxSTOurg==: 00:24:27.147 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: 00:24:27.147 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:27.147 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:27.147 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkzNDVlY2M2MjgzMjY3YzU0NzNlNjcyZjU5MTA5NWM4OTJkNzA5ZDYwOWNjOWYxSTOurg==: 00:24:27.147 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: ]] 00:24:27.147 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: 00:24:27.147 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:27.147 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.147 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.147 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.147 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:24:27.147 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.148 request: 00:24:27.148 { 00:24:27.148 "name": "nvme0", 00:24:27.148 "trtype": "tcp", 00:24:27.148 "traddr": "10.0.0.1", 00:24:27.148 "adrfam": "ipv4", 00:24:27.148 "trsvcid": "4420", 00:24:27.148 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:27.148 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:27.148 "prchk_reftag": false, 00:24:27.148 "prchk_guard": false, 00:24:27.148 "hdgst": false, 00:24:27.148 "ddgst": false, 00:24:27.148 "allow_unrecognized_csi": false, 00:24:27.148 "method": "bdev_nvme_attach_controller", 00:24:27.148 "req_id": 1 00:24:27.148 } 00:24:27.148 Got JSON-RPC error response 00:24:27.148 response: 00:24:27.148 { 00:24:27.148 "code": -5, 00:24:27.148 "message": "Input/output error" 00:24:27.148 } 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
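For reference, the rejected attach traced above is the expected negative path of host/auth.sh: the target subsystem requires DH-HMAC-CHAP, so bdev_nvme_attach_controller issued without any --dhchap-key fails, and the RPC surfaces it as the JSON-RPC error -5 ("Input/output error") shown in the request/response dump; the follow-up bdev_nvme_get_controllers piped through 'jq length' confirms that no controller was left behind. A minimal sketch of the same sequence driven directly through scripts/rpc.py (assuming rpc_cmd wraps scripts/rpc.py in the standard SPDK tree; the address, port, and NQNs are the ones visible in the log, and the key material is omitted):

    # Host side: restrict DH-CHAP negotiation to the digest/dhgroup under test (sha256 / ffdhe2048 at this point in the log).
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    # Expected to fail with -5: the target enforces authentication, but no --dhchap-key is supplied.
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
    # Verify nothing was attached.
    ./scripts/rpc.py bdev_nvme_get_controllers | jq length   # expect 0
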
00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.148 request: 00:24:27.148 { 00:24:27.148 "name": "nvme0", 00:24:27.148 "trtype": "tcp", 00:24:27.148 "traddr": "10.0.0.1", 00:24:27.148 "adrfam": "ipv4", 00:24:27.148 "trsvcid": "4420", 00:24:27.148 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:27.148 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:27.148 "prchk_reftag": false, 00:24:27.148 "prchk_guard": false, 00:24:27.148 "hdgst": false, 00:24:27.148 "ddgst": false, 00:24:27.148 "dhchap_key": "key2", 00:24:27.148 "allow_unrecognized_csi": false, 00:24:27.148 "method": "bdev_nvme_attach_controller", 00:24:27.148 "req_id": 1 00:24:27.148 } 00:24:27.148 Got JSON-RPC error response 00:24:27.148 response: 00:24:27.148 { 00:24:27.148 "code": -5, 00:24:27.148 "message": "Input/output error" 00:24:27.148 } 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.148 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
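The second rejected attach above differs only in that it passes --dhchap-key key2 (visible as "dhchap_key": "key2" in the request dump) while the target was provisioned at this point with the keyid-1 secret, so authentication fails with the same -5 error and the controller count again stays at 0. The trace that follows repeats the attempt with the matching host key but a mismatched controller key, which is likewise expected to fail. A sketch of those two invocations, again assuming rpc_cmd is scripts/rpc.py (key1, key2, and ckey2 are the key names used by this test run, not literal secrets):

    # Wrong host key for this subsystem: expected to fail with -5.
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
    # Correct host key but mismatched controller (bidirectional) key: also expected to fail.
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
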
00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.407 request: 00:24:27.407 { 00:24:27.407 "name": "nvme0", 00:24:27.407 "trtype": "tcp", 00:24:27.407 "traddr": "10.0.0.1", 00:24:27.407 "adrfam": "ipv4", 00:24:27.407 "trsvcid": "4420", 00:24:27.407 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:27.407 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:27.407 "prchk_reftag": false, 00:24:27.407 "prchk_guard": false, 00:24:27.407 "hdgst": false, 00:24:27.407 "ddgst": false, 00:24:27.407 "dhchap_key": "key1", 00:24:27.407 "dhchap_ctrlr_key": "ckey2", 00:24:27.407 "allow_unrecognized_csi": false, 00:24:27.407 "method": "bdev_nvme_attach_controller", 00:24:27.407 "req_id": 1 00:24:27.407 } 00:24:27.407 Got JSON-RPC error response 00:24:27.407 response: 00:24:27.407 { 00:24:27.407 "code": -5, 00:24:27.407 "message": "Input/output 
error" 00:24:27.407 } 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.407 20:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.407 nvme0n1 00:24:27.407 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.407 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:27.407 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.407 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:27.408 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:27.408 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:27.408 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTI1NGUzYmUzMTFkNTQxMGFjZDZlMTcxMjA4MWZlYjlr7V90: 00:24:27.408 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: 00:24:27.408 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:27.408 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:27.408 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTI1NGUzYmUzMTFkNTQxMGFjZDZlMTcxMjA4MWZlYjlr7V90: 00:24:27.408 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: ]] 00:24:27.408 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: 00:24:27.408 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:27.408 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.408 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.666 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.666 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.666 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.666 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:24:27.666 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.666 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.666 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.666 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:27.666 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:27.666 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:27.666 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:27.666 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:27.666 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:27.666 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:27.666 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:27.666 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.666 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.666 request: 00:24:27.666 { 00:24:27.666 "name": "nvme0", 00:24:27.666 "dhchap_key": "key1", 00:24:27.666 "dhchap_ctrlr_key": "ckey2", 00:24:27.666 "method": "bdev_nvme_set_keys", 00:24:27.666 "req_id": 1 00:24:27.666 } 00:24:27.666 Got JSON-RPC error response 00:24:27.666 response: 00:24:27.666 { 00:24:27.666 "code": -13, 00:24:27.666 "message": "Permission denied" 00:24:27.666 } 00:24:27.666 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:27.667 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:27.667 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:27.667 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:27.667 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:24:27.667 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.667 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:27.667 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.667 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.667 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.667 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:24:27.667 20:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:24:28.600 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.600 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:28.600 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.600 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.600 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDkzNDVlY2M2MjgzMjY3YzU0NzNlNjcyZjU5MTA5NWM4OTJkNzA5ZDYwOWNjOWYxSTOurg==: 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDkzNDVlY2M2MjgzMjY3YzU0NzNlNjcyZjU5MTA5NWM4OTJkNzA5ZDYwOWNjOWYxSTOurg==: 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: ]] 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGU2NzVhNThlYzg1YmRmZjk0ZTA1YTYwZTg1NTczMjg3ODAwOTc1N2M3NmJkZWE0kKW4+A==: 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.859 
20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.859 nvme0n1 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTI1NGUzYmUzMTFkNTQxMGFjZDZlMTcxMjA4MWZlYjlr7V90: 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTI1NGUzYmUzMTFkNTQxMGFjZDZlMTcxMjA4MWZlYjlr7V90: 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: ]] 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2EzODQ3YTk1MDkzN2YxYzMxMWZlMmYyNTdlMzVkZWJ0mRxz: 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:28.859 20:54:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.859 request: 00:24:28.859 { 00:24:28.859 "name": "nvme0", 00:24:28.859 "dhchap_key": "key2", 00:24:28.859 "dhchap_ctrlr_key": "ckey1", 00:24:28.859 "method": "bdev_nvme_set_keys", 00:24:28.859 "req_id": 1 00:24:28.859 } 00:24:28.859 Got JSON-RPC error response 00:24:28.859 response: 00:24:28.859 { 00:24:28.859 "code": -13, 00:24:28.859 "message": "Permission denied" 00:24:28.859 } 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.859 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.118 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.118 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:24:29.118 20:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:24:30.053 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.053 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:30.053 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.053 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.053 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.053 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:24:30.053 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:24:30.053 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:24:30.053 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:24:30.053 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:30.053 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:24:30.053 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:30.053 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:24:30.053 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:30.053 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:30.053 rmmod nvme_tcp 00:24:30.053 rmmod nvme_fabrics 00:24:30.053 
20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:30.053 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:24:30.053 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:24:30.053 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1752229 ']' 00:24:30.053 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1752229 00:24:30.053 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1752229 ']' 00:24:30.053 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1752229 00:24:30.053 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:24:30.053 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:30.053 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1752229 00:24:30.053 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:30.053 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:30.053 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1752229' 00:24:30.053 killing process with pid 1752229 00:24:30.053 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1752229 00:24:30.053 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1752229 00:24:30.313 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:30.313 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:30.313 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:30.313 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:24:30.313 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:24:30.313 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:30.313 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:30.313 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:30.313 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:30.313 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.313 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:30.313 20:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.849 20:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:32.849 20:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:32.849 20:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:32.849 20:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:24:32.849 20:54:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:32.849 20:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:24:32.849 20:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:32.849 20:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:32.849 20:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:32.849 20:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:32.849 20:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:32.849 20:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:32.849 20:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:33.785 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:33.785 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:33.785 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:33.785 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:33.785 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:33.785 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:33.785 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:33.785 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:33.785 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:33.785 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:33.785 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:33.785 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:33.785 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:33.785 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:33.785 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:33.785 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:34.723 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:24:34.983 20:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Azs /tmp/spdk.key-null.2Ir /tmp/spdk.key-sha256.JOB /tmp/spdk.key-sha384.PFs /tmp/spdk.key-sha512.w4S /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:24:34.983 20:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:36.360 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:36.360 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:36.360 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:36.360 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:36.360 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:36.360 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:36.360 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:36.360 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:36.360 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:36.360 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:24:36.360 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:36.360 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:36.360 0000:80:04.4 (8086 0e24): Already 
using the vfio-pci driver 00:24:36.360 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:36.360 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:36.360 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:36.360 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:36.360 00:24:36.360 real 0m50.217s 00:24:36.360 user 0m47.716s 00:24:36.360 sys 0m6.105s 00:24:36.360 20:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:36.360 20:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.360 ************************************ 00:24:36.360 END TEST nvmf_auth_host 00:24:36.360 ************************************ 00:24:36.360 20:54:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:24:36.360 20:54:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:36.360 20:54:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:36.360 20:54:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:36.360 20:54:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.360 ************************************ 00:24:36.360 START TEST nvmf_digest 00:24:36.360 ************************************ 00:24:36.360 20:54:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:36.360 * Looking for test storage... 00:24:36.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:36.360 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:36.360 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:24:36.360 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 
00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:36.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.622 --rc genhtml_branch_coverage=1 00:24:36.622 --rc genhtml_function_coverage=1 00:24:36.622 --rc genhtml_legend=1 00:24:36.622 --rc geninfo_all_blocks=1 00:24:36.622 --rc geninfo_unexecuted_blocks=1 00:24:36.622 00:24:36.622 ' 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:36.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.622 --rc genhtml_branch_coverage=1 00:24:36.622 --rc genhtml_function_coverage=1 00:24:36.622 --rc genhtml_legend=1 00:24:36.622 --rc geninfo_all_blocks=1 00:24:36.622 --rc geninfo_unexecuted_blocks=1 00:24:36.622 00:24:36.622 ' 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:36.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.622 --rc genhtml_branch_coverage=1 00:24:36.622 --rc genhtml_function_coverage=1 00:24:36.622 --rc genhtml_legend=1 00:24:36.622 --rc geninfo_all_blocks=1 00:24:36.622 --rc geninfo_unexecuted_blocks=1 00:24:36.622 00:24:36.622 ' 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:36.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.622 --rc genhtml_branch_coverage=1 00:24:36.622 --rc genhtml_function_coverage=1 00:24:36.622 --rc genhtml_legend=1 00:24:36.622 --rc geninfo_all_blocks=1 00:24:36.622 --rc geninfo_unexecuted_blocks=1 00:24:36.622 00:24:36.622 ' 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:36.622 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ 
tcp != \t\c\p ]] 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:24:36.622 20:54:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:38.526 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:38.526 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:24:38.526 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:38.526 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:38.526 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:38.526 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:38.526 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:38.526 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:24:38.526 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:38.526 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:24:38.526 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:24:38.526 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:24:38.526 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:24:38.526 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:24:38.526 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:24:38.526 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:38.527 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:38.527 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp 
== tcp ]] 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:38.527 Found net devices under 0000:09:00.0: cvl_0_0 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:38.527 Found net devices under 0000:09:00.1: cvl_0_1 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.527 20:54:42 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:38.527 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:38.786 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:38.786 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:24:38.786 00:24:38.786 --- 10.0.0.2 ping statistics --- 00:24:38.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.786 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:38.786 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:38.786 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:24:38.786 00:24:38.786 --- 10.0.0.1 ping statistics --- 00:24:38.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.786 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:38.786 ************************************ 00:24:38.786 START TEST nvmf_digest_clean 00:24:38.786 ************************************ 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1762334 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 
1762334 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1762334 ']' 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:38.786 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:38.786 [2024-11-26 20:54:42.401497] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:24:38.786 [2024-11-26 20:54:42.401591] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:38.786 [2024-11-26 20:54:42.473143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.044 [2024-11-26 20:54:42.530979] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:39.044 [2024-11-26 20:54:42.531028] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:39.044 [2024-11-26 20:54:42.531056] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:39.044 [2024-11-26 20:54:42.531068] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:39.044 [2024-11-26 20:54:42.531077] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
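Condensed, the interface, namespace and target bring-up traced above comes down to the sequence below. This is a sketch only: the cvl_0_* port names, the 10.0.0.x addresses, the core mask and the workspace path are specific to this run, paths are abbreviated to the SPDK tree root, and the iptables rule in the trace additionally carries an SPDK_NVMF comment tag.

# put one port of the pair into its own namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow NVMe/TCP (port 4420) in, verify reachability both ways, load the host driver
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp
# start the target inside the namespace; --wait-for-rpc holds off subsystem
# initialization until framework_start_init arrives over the RPC socket
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
# the test then waits for the target to listen on /var/tmp/spdk.sock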
00:24:39.044 [2024-11-26 20:54:42.531735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.044 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:39.044 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:39.044 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:39.044 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:39.044 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:39.044 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:39.044 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:39.044 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:39.044 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:39.044 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.044 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:39.302 null0 00:24:39.303 [2024-11-26 20:54:42.771199] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:39.303 [2024-11-26 20:54:42.795437] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:39.303 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.303 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:39.303 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:39.303 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:39.303 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:39.303 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:39.303 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:39.303 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:39.303 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1762362 00:24:39.303 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:39.303 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1762362 /var/tmp/bperf.sock 00:24:39.303 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1762362 ']' 00:24:39.303 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:39.303 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:24:39.303 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:39.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:39.303 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:39.303 20:54:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:39.303 [2024-11-26 20:54:42.845104] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:24:39.303 [2024-11-26 20:54:42.845179] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1762362 ] 00:24:39.303 [2024-11-26 20:54:42.914788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.303 [2024-11-26 20:54:42.974056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:39.561 20:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:39.561 20:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:39.561 20:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:39.561 20:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:39.561 20:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:39.819 20:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:39.820 20:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:40.385 nvme0n1 00:24:40.385 20:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:40.385 20:54:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:40.385 Running I/O for 2 seconds... 
00:24:42.691 18916.00 IOPS, 73.89 MiB/s [2024-11-26T19:54:46.388Z] 18910.50 IOPS, 73.87 MiB/s 00:24:42.691 Latency(us) 00:24:42.691 [2024-11-26T19:54:46.388Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:42.691 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:42.691 nvme0n1 : 2.01 18909.63 73.87 0.00 0.00 6758.61 3349.62 13010.11 00:24:42.691 [2024-11-26T19:54:46.388Z] =================================================================================================================== 00:24:42.691 [2024-11-26T19:54:46.388Z] Total : 18909.63 73.87 0.00 0.00 6758.61 3349.62 13010.11 00:24:42.691 { 00:24:42.691 "results": [ 00:24:42.691 { 00:24:42.691 "job": "nvme0n1", 00:24:42.691 "core_mask": "0x2", 00:24:42.691 "workload": "randread", 00:24:42.691 "status": "finished", 00:24:42.691 "queue_depth": 128, 00:24:42.691 "io_size": 4096, 00:24:42.691 "runtime": 2.006861, 00:24:42.691 "iops": 18909.630512526775, 00:24:42.691 "mibps": 73.86574418955772, 00:24:42.691 "io_failed": 0, 00:24:42.691 "io_timeout": 0, 00:24:42.691 "avg_latency_us": 6758.6128704899265, 00:24:42.691 "min_latency_us": 3349.617777777778, 00:24:42.691 "max_latency_us": 13010.10962962963 00:24:42.691 } 00:24:42.691 ], 00:24:42.691 "core_count": 1 00:24:42.691 } 00:24:42.691 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:42.691 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:42.691 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:42.691 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:42.691 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:42.691 | select(.opcode=="crc32c") 00:24:42.691 | "\(.module_name) \(.executed)"' 00:24:42.691 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:42.691 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:42.691 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:42.691 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:42.691 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1762362 00:24:42.691 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1762362 ']' 00:24:42.691 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1762362 00:24:42.691 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:42.691 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:42.691 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1762362 00:24:42.691 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:42.691 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:24:42.691 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1762362' 00:24:42.691 killing process with pid 1762362 00:24:42.691 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1762362 00:24:42.691 Received shutdown signal, test time was about 2.000000 seconds 00:24:42.691 00:24:42.691 Latency(us) 00:24:42.691 [2024-11-26T19:54:46.388Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:42.691 [2024-11-26T19:54:46.388Z] =================================================================================================================== 00:24:42.691 [2024-11-26T19:54:46.388Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:42.691 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1762362 00:24:42.993 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:42.993 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:42.993 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:42.993 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:42.993 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:42.993 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:42.993 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:42.993 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1762884 00:24:42.993 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:42.993 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1762884 /var/tmp/bperf.sock 00:24:42.993 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1762884 ']' 00:24:42.993 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:42.993 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:42.993 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:42.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:42.993 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:42.993 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:42.993 [2024-11-26 20:54:46.653419] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:24:42.993 [2024-11-26 20:54:46.653505] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1762884 ] 00:24:42.993 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:42.993 Zero copy mechanism will not be used. 00:24:43.269 [2024-11-26 20:54:46.719819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.269 [2024-11-26 20:54:46.777196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:43.269 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:43.269 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:43.269 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:43.269 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:43.269 20:54:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:43.835 20:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:43.835 20:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:44.092 nvme0n1 00:24:44.092 20:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:44.092 20:54:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:44.092 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:44.092 Zero copy mechanism will not be used. 00:24:44.092 Running I/O for 2 seconds... 
00:24:46.400 5670.00 IOPS, 708.75 MiB/s [2024-11-26T19:54:50.097Z] 5764.00 IOPS, 720.50 MiB/s 00:24:46.400 Latency(us) 00:24:46.400 [2024-11-26T19:54:50.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.400 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:46.400 nvme0n1 : 2.00 5761.49 720.19 0.00 0.00 2772.95 591.64 11116.85 00:24:46.400 [2024-11-26T19:54:50.097Z] =================================================================================================================== 00:24:46.400 [2024-11-26T19:54:50.097Z] Total : 5761.49 720.19 0.00 0.00 2772.95 591.64 11116.85 00:24:46.400 { 00:24:46.400 "results": [ 00:24:46.400 { 00:24:46.400 "job": "nvme0n1", 00:24:46.400 "core_mask": "0x2", 00:24:46.400 "workload": "randread", 00:24:46.400 "status": "finished", 00:24:46.400 "queue_depth": 16, 00:24:46.400 "io_size": 131072, 00:24:46.400 "runtime": 2.00365, 00:24:46.400 "iops": 5761.4852893469415, 00:24:46.400 "mibps": 720.1856611683677, 00:24:46.400 "io_failed": 0, 00:24:46.400 "io_timeout": 0, 00:24:46.400 "avg_latency_us": 2772.952759939427, 00:24:46.400 "min_latency_us": 591.6444444444444, 00:24:46.400 "max_latency_us": 11116.847407407407 00:24:46.400 } 00:24:46.400 ], 00:24:46.400 "core_count": 1 00:24:46.400 } 00:24:46.400 20:54:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:46.400 20:54:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:46.400 20:54:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:46.400 20:54:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:46.400 20:54:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:46.400 | select(.opcode=="crc32c") 00:24:46.400 | "\(.module_name) \(.executed)"' 00:24:46.400 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:46.400 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:46.400 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:46.400 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:46.400 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1762884 00:24:46.400 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1762884 ']' 00:24:46.400 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1762884 00:24:46.400 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:46.400 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:46.400 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1762884 00:24:46.400 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:46.400 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:24:46.400 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1762884' 00:24:46.400 killing process with pid 1762884 00:24:46.400 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1762884 00:24:46.400 Received shutdown signal, test time was about 2.000000 seconds 00:24:46.400 00:24:46.400 Latency(us) 00:24:46.400 [2024-11-26T19:54:50.097Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.400 [2024-11-26T19:54:50.097Z] =================================================================================================================== 00:24:46.400 [2024-11-26T19:54:50.097Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:46.400 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1762884 00:24:46.658 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:46.658 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:46.658 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:46.658 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:46.658 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:46.658 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:46.658 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:46.658 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1763299 00:24:46.658 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:46.658 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1763299 /var/tmp/bperf.sock 00:24:46.658 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1763299 ']' 00:24:46.658 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:46.658 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:46.658 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:46.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:46.658 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:46.658 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:46.917 [2024-11-26 20:54:50.369084] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:24:46.917 [2024-11-26 20:54:50.369181] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1763299 ] 00:24:46.917 [2024-11-26 20:54:50.436058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.917 [2024-11-26 20:54:50.491956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:46.917 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:46.917 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:46.917 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:46.917 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:46.917 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:47.483 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:47.483 20:54:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:47.740 nvme0n1 00:24:47.740 20:54:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:47.740 20:54:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:47.740 Running I/O for 2 seconds... 
00:24:50.043 20252.00 IOPS, 79.11 MiB/s [2024-11-26T19:54:53.740Z] 19426.00 IOPS, 75.88 MiB/s 00:24:50.043 Latency(us) 00:24:50.043 [2024-11-26T19:54:53.740Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.043 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:50.043 nvme0n1 : 2.01 19425.28 75.88 0.00 0.00 6574.32 2682.12 11456.66 00:24:50.043 [2024-11-26T19:54:53.740Z] =================================================================================================================== 00:24:50.043 [2024-11-26T19:54:53.740Z] Total : 19425.28 75.88 0.00 0.00 6574.32 2682.12 11456.66 00:24:50.043 { 00:24:50.043 "results": [ 00:24:50.043 { 00:24:50.043 "job": "nvme0n1", 00:24:50.043 "core_mask": "0x2", 00:24:50.043 "workload": "randwrite", 00:24:50.043 "status": "finished", 00:24:50.043 "queue_depth": 128, 00:24:50.043 "io_size": 4096, 00:24:50.043 "runtime": 2.008311, 00:24:50.043 "iops": 19425.278256206333, 00:24:50.043 "mibps": 75.87999318830599, 00:24:50.043 "io_failed": 0, 00:24:50.043 "io_timeout": 0, 00:24:50.043 "avg_latency_us": 6574.321018034338, 00:24:50.043 "min_latency_us": 2682.1214814814816, 00:24:50.043 "max_latency_us": 11456.663703703704 00:24:50.043 } 00:24:50.043 ], 00:24:50.043 "core_count": 1 00:24:50.043 } 00:24:50.043 20:54:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:50.043 20:54:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:50.043 20:54:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:50.043 20:54:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:50.043 | select(.opcode=="crc32c") 00:24:50.043 | "\(.module_name) \(.executed)"' 00:24:50.043 20:54:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:50.043 20:54:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:50.043 20:54:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:50.043 20:54:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:50.043 20:54:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:50.043 20:54:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1763299 00:24:50.043 20:54:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1763299 ']' 00:24:50.043 20:54:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1763299 00:24:50.043 20:54:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:50.043 20:54:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:50.301 20:54:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1763299 00:24:50.301 20:54:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:50.301 20:54:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:24:50.301 20:54:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1763299' 00:24:50.301 killing process with pid 1763299 00:24:50.301 20:54:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1763299 00:24:50.301 Received shutdown signal, test time was about 2.000000 seconds 00:24:50.301 00:24:50.301 Latency(us) 00:24:50.301 [2024-11-26T19:54:53.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.301 [2024-11-26T19:54:53.998Z] =================================================================================================================== 00:24:50.301 [2024-11-26T19:54:53.998Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:50.301 20:54:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1763299 00:24:50.558 20:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:50.558 20:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:50.558 20:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:50.558 20:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:50.558 20:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:50.558 20:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:50.558 20:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:50.558 20:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1763711 00:24:50.558 20:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:50.558 20:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1763711 /var/tmp/bperf.sock 00:24:50.558 20:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1763711 ']' 00:24:50.558 20:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:50.558 20:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:50.558 20:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:50.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:50.558 20:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:50.558 20:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:50.558 [2024-11-26 20:54:54.053008] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:24:50.558 [2024-11-26 20:54:54.053090] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1763711 ] 00:24:50.558 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:50.558 Zero copy mechanism will not be used. 00:24:50.558 [2024-11-26 20:54:54.117462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.558 [2024-11-26 20:54:54.172006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:50.816 20:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:50.816 20:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:50.816 20:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:50.816 20:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:50.816 20:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:51.073 20:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:51.073 20:54:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:51.331 nvme0n1 00:24:51.589 20:54:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:51.589 20:54:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:51.589 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:51.589 Zero copy mechanism will not be used. 00:24:51.589 Running I/O for 2 seconds... 
00:24:53.896 6063.00 IOPS, 757.88 MiB/s [2024-11-26T19:54:57.593Z] 6120.50 IOPS, 765.06 MiB/s 00:24:53.896 Latency(us) 00:24:53.896 [2024-11-26T19:54:57.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:53.896 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:53.896 nvme0n1 : 2.00 6117.46 764.68 0.00 0.00 2608.49 2063.17 9951.76 00:24:53.896 [2024-11-26T19:54:57.593Z] =================================================================================================================== 00:24:53.896 [2024-11-26T19:54:57.593Z] Total : 6117.46 764.68 0.00 0.00 2608.49 2063.17 9951.76 00:24:53.896 { 00:24:53.896 "results": [ 00:24:53.896 { 00:24:53.896 "job": "nvme0n1", 00:24:53.896 "core_mask": "0x2", 00:24:53.896 "workload": "randwrite", 00:24:53.896 "status": "finished", 00:24:53.896 "queue_depth": 16, 00:24:53.896 "io_size": 131072, 00:24:53.896 "runtime": 2.004264, 00:24:53.896 "iops": 6117.457580438505, 00:24:53.896 "mibps": 764.6821975548131, 00:24:53.896 "io_failed": 0, 00:24:53.896 "io_timeout": 0, 00:24:53.896 "avg_latency_us": 2608.4930335571685, 00:24:53.896 "min_latency_us": 2063.17037037037, 00:24:53.896 "max_latency_us": 9951.762962962963 00:24:53.896 } 00:24:53.896 ], 00:24:53.896 "core_count": 1 00:24:53.896 } 00:24:53.896 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:53.896 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:53.896 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:53.896 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:53.896 | select(.opcode=="crc32c") 00:24:53.896 | "\(.module_name) \(.executed)"' 00:24:53.896 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:53.896 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:53.896 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:53.896 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:53.896 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:53.896 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1763711 00:24:53.896 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1763711 ']' 00:24:53.896 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1763711 00:24:53.896 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:53.896 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:53.896 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1763711 00:24:53.896 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:53.896 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:24:53.896 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1763711' 00:24:53.896 killing process with pid 1763711 00:24:53.896 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1763711 00:24:53.896 Received shutdown signal, test time was about 2.000000 seconds 00:24:53.896 00:24:53.896 Latency(us) 00:24:53.896 [2024-11-26T19:54:57.593Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:53.896 [2024-11-26T19:54:57.594Z] =================================================================================================================== 00:24:53.897 [2024-11-26T19:54:57.594Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:53.897 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1763711 00:24:54.155 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1762334 00:24:54.155 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1762334 ']' 00:24:54.155 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1762334 00:24:54.155 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:54.155 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:54.155 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1762334 00:24:54.155 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:54.155 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:54.155 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1762334' 00:24:54.155 killing process with pid 1762334 00:24:54.155 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1762334 00:24:54.155 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1762334 00:24:54.415 00:24:54.415 real 0m15.624s 00:24:54.415 user 0m31.234s 00:24:54.415 sys 0m4.345s 00:24:54.415 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:54.415 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:54.415 ************************************ 00:24:54.415 END TEST nvmf_digest_clean 00:24:54.415 ************************************ 00:24:54.415 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:54.415 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:54.415 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:54.415 20:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:54.415 ************************************ 00:24:54.415 START TEST nvmf_digest_error 00:24:54.415 ************************************ 00:24:54.415 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:24:54.415 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:54.415 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:54.415 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:54.415 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:54.415 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1764267 00:24:54.415 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:54.415 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1764267 00:24:54.415 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1764267 ']' 00:24:54.415 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.415 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:54.415 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:54.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:54.415 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:54.415 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:54.415 [2024-11-26 20:54:58.086534] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:24:54.415 [2024-11-26 20:54:58.086631] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.674 [2024-11-26 20:54:58.155041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.674 [2024-11-26 20:54:58.206812] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:54.674 [2024-11-26 20:54:58.206872] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:54.674 [2024-11-26 20:54:58.206899] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:54.674 [2024-11-26 20:54:58.206910] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:54.674 [2024-11-26 20:54:58.206919] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:54.674 [2024-11-26 20:54:58.207477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.674 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:54.674 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:24:54.674 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:54.674 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:54.674 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:54.674 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:54.674 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:54.674 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.674 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:54.674 [2024-11-26 20:54:58.336176] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:54.674 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.674 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:24:54.674 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:24:54.674 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.674 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:54.933 null0 00:24:54.933 [2024-11-26 20:54:58.458627] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:54.933 [2024-11-26 20:54:58.482849] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:54.933 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.933 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:54.933 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:54.933 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:54.933 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:54.933 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:54.933 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1764287 00:24:54.933 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:54.933 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1764287 /var/tmp/bperf.sock 00:24:54.933 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1764287 ']' 
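The error variant repeats the same bring-up but reroutes crc32c before the target finishes initializing, then parks a second bdevperf instance on its own RPC socket. Roughly, assuming the rpc_cmd and bperf_rpc wrappers in the trace resolve to scripts/rpc.py against /var/tmp/spdk.sock and /var/tmp/bperf.sock respectively, and with the full workspace paths shortened to the SPDK tree root:

# target side: route crc32c through the error-injecting accel module while the
# app is still held at --wait-for-rpc, then finish the usual target config
# (framework init, a null bdev, the TCP transport and the 10.0.0.2:4420 listener
# whose notices appear in the trace above)
scripts/rpc.py accel_assign_opc -o crc32c -m error

# host side: bdevperf started with -z so it waits to be configured and driven
# over its own socket instead of starting the workload immediately
build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &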
00:24:54.933 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:54.933 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:54.933 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:54.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:54.933 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:54.933 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:54.933 [2024-11-26 20:54:58.530820] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:24:54.933 [2024-11-26 20:54:58.530892] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1764287 ] 00:24:54.933 [2024-11-26 20:54:58.595237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.192 [2024-11-26 20:54:58.654238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.192 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:55.192 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:24:55.192 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:55.192 20:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:55.450 20:54:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:55.450 20:54:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.450 20:54:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:55.450 20:54:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.450 20:54:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:55.450 20:54:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:56.035 nvme0n1 00:24:56.035 20:54:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:56.035 20:54:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.035 20:54:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
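Before the reads are kicked off, the host-side bdev layer is told to retry indefinitely and keep NVMe error statistics, crc32c injection is switched off while the controller attaches with data digest enabled, and corruption is only turned on afterwards. A sketch of the RPC sequence traced here and just below, with option values copied from the trace rather than interpreted, sockets inferred from the bperf_rpc/rpc_cmd wrappers, and paths shortened to the SPDK tree root:

# bdevperf socket: never give up on retries, track per-opcode NVMe errors
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# target socket: no crc32c injection while the data-digest connection is set up
scripts/rpc.py accel_error_inject_error -o crc32c -t disable
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# now corrupt crc32c results (-o crc32c -t corrupt -i 256, as traced) and run the
# workload; the reads that follow complete with the data digest errors logged below
scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests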
00:24:56.035 20:54:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.035 20:54:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:56.035 20:54:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:56.035 Running I/O for 2 seconds... 00:24:56.035 [2024-11-26 20:54:59.568934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.035 [2024-11-26 20:54:59.568988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.035 [2024-11-26 20:54:59.569008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.035 [2024-11-26 20:54:59.583493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.035 [2024-11-26 20:54:59.583527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.035 [2024-11-26 20:54:59.583559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.035 [2024-11-26 20:54:59.599402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.035 [2024-11-26 20:54:59.599432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.035 [2024-11-26 20:54:59.599464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.035 [2024-11-26 20:54:59.611695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.035 [2024-11-26 20:54:59.611726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.035 [2024-11-26 20:54:59.611759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.035 [2024-11-26 20:54:59.622402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.035 [2024-11-26 20:54:59.622431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.035 [2024-11-26 20:54:59.622464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.035 [2024-11-26 20:54:59.636193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.035 [2024-11-26 20:54:59.636224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.035 [2024-11-26 20:54:59.636242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.035 [2024-11-26 20:54:59.650591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.036 [2024-11-26 20:54:59.650620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.036 [2024-11-26 20:54:59.650651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.036 [2024-11-26 20:54:59.665438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.036 [2024-11-26 20:54:59.665469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.036 [2024-11-26 20:54:59.665486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.036 [2024-11-26 20:54:59.677561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.036 [2024-11-26 20:54:59.677605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.036 [2024-11-26 20:54:59.677621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.036 [2024-11-26 20:54:59.694065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.036 [2024-11-26 20:54:59.694093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.036 [2024-11-26 20:54:59.694124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.036 [2024-11-26 20:54:59.708728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.036 [2024-11-26 20:54:59.708761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.036 [2024-11-26 20:54:59.708779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.036 [2024-11-26 20:54:59.720277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.036 [2024-11-26 20:54:59.720327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.036 [2024-11-26 20:54:59.720347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.294 [2024-11-26 20:54:59.734549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.294 [2024-11-26 20:54:59.734593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.294 [2024-11-26 20:54:59.734610] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.294 [2024-11-26 20:54:59.747211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.294 [2024-11-26 20:54:59.747242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.294 [2024-11-26 20:54:59.747260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.294 [2024-11-26 20:54:59.758536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.294 [2024-11-26 20:54:59.758564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.294 [2024-11-26 20:54:59.758580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.294 [2024-11-26 20:54:59.774924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.294 [2024-11-26 20:54:59.774952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.294 [2024-11-26 20:54:59.774968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.294 [2024-11-26 20:54:59.790257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.294 [2024-11-26 20:54:59.790285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.294 [2024-11-26 20:54:59.790329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.294 [2024-11-26 20:54:59.805106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.294 [2024-11-26 20:54:59.805137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.294 [2024-11-26 20:54:59.805155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.294 [2024-11-26 20:54:59.816102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.294 [2024-11-26 20:54:59.816129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.294 [2024-11-26 20:54:59.816161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.294 [2024-11-26 20:54:59.831747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.294 [2024-11-26 20:54:59.831776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.294 [2024-11-26 
20:54:59.831807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.294 [2024-11-26 20:54:59.848622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.294 [2024-11-26 20:54:59.848655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:21037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.294 [2024-11-26 20:54:59.848673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.294 [2024-11-26 20:54:59.859090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.295 [2024-11-26 20:54:59.859122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.295 [2024-11-26 20:54:59.859155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.295 [2024-11-26 20:54:59.873713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.295 [2024-11-26 20:54:59.873741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.295 [2024-11-26 20:54:59.873771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.295 [2024-11-26 20:54:59.888283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.295 [2024-11-26 20:54:59.888324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.295 [2024-11-26 20:54:59.888343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.295 [2024-11-26 20:54:59.899188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.295 [2024-11-26 20:54:59.899218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.295 [2024-11-26 20:54:59.899248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.295 [2024-11-26 20:54:59.915120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.295 [2024-11-26 20:54:59.915158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.295 [2024-11-26 20:54:59.915176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.295 [2024-11-26 20:54:59.928362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.295 [2024-11-26 20:54:59.928394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23590 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:56.295 [2024-11-26 20:54:59.928411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.295 [2024-11-26 20:54:59.943115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.295 [2024-11-26 20:54:59.943143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.295 [2024-11-26 20:54:59.943159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.295 [2024-11-26 20:54:59.955010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.295 [2024-11-26 20:54:59.955065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.295 [2024-11-26 20:54:59.955083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.295 [2024-11-26 20:54:59.970056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.295 [2024-11-26 20:54:59.970083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.295 [2024-11-26 20:54:59.970115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.295 [2024-11-26 20:54:59.982850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.295 [2024-11-26 20:54:59.982878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.295 [2024-11-26 20:54:59.982908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.554 [2024-11-26 20:54:59.995786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.554 [2024-11-26 20:54:59.995814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.554 [2024-11-26 20:54:59.995845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.554 [2024-11-26 20:55:00.010484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.554 [2024-11-26 20:55:00.010525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.554 [2024-11-26 20:55:00.010544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.554 [2024-11-26 20:55:00.028328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.554 [2024-11-26 20:55:00.028374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:25516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.554 [2024-11-26 20:55:00.028416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.554 [2024-11-26 20:55:00.041628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.554 [2024-11-26 20:55:00.041674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.554 [2024-11-26 20:55:00.041714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.554 [2024-11-26 20:55:00.058255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.554 [2024-11-26 20:55:00.058296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.554 [2024-11-26 20:55:00.058346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.554 [2024-11-26 20:55:00.071140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.554 [2024-11-26 20:55:00.071178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.554 [2024-11-26 20:55:00.071219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.554 [2024-11-26 20:55:00.088190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.554 [2024-11-26 20:55:00.088237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.554 [2024-11-26 20:55:00.088254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.554 [2024-11-26 20:55:00.102873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.554 [2024-11-26 20:55:00.102903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.554 [2024-11-26 20:55:00.102936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.554 [2024-11-26 20:55:00.117909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.554 [2024-11-26 20:55:00.117942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.554 [2024-11-26 20:55:00.117960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.554 [2024-11-26 20:55:00.129009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.554 [2024-11-26 20:55:00.129054] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.554 [2024-11-26 20:55:00.129071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.554 [2024-11-26 20:55:00.144797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.554 [2024-11-26 20:55:00.144829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.554 [2024-11-26 20:55:00.144847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.554 [2024-11-26 20:55:00.159670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.554 [2024-11-26 20:55:00.159706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.554 [2024-11-26 20:55:00.159738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.554 [2024-11-26 20:55:00.175271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.554 [2024-11-26 20:55:00.175300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.554 [2024-11-26 20:55:00.175340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.554 [2024-11-26 20:55:00.191078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.554 [2024-11-26 20:55:00.191105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.554 [2024-11-26 20:55:00.191136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.554 [2024-11-26 20:55:00.201843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.554 [2024-11-26 20:55:00.201871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.554 [2024-11-26 20:55:00.201901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.554 [2024-11-26 20:55:00.215094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.554 [2024-11-26 20:55:00.215122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.554 [2024-11-26 20:55:00.215151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.554 [2024-11-26 20:55:00.228918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 
00:24:56.554 [2024-11-26 20:55:00.228949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.554 [2024-11-26 20:55:00.228965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.554 [2024-11-26 20:55:00.239609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.554 [2024-11-26 20:55:00.239637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.554 [2024-11-26 20:55:00.239668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.811 [2024-11-26 20:55:00.255281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.812 [2024-11-26 20:55:00.255333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.812 [2024-11-26 20:55:00.255350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.812 [2024-11-26 20:55:00.269985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.812 [2024-11-26 20:55:00.270030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.812 [2024-11-26 20:55:00.270045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.812 [2024-11-26 20:55:00.281221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.812 [2024-11-26 20:55:00.281249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.812 [2024-11-26 20:55:00.281279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.812 [2024-11-26 20:55:00.294283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.812 [2024-11-26 20:55:00.294319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.812 [2024-11-26 20:55:00.294351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.812 [2024-11-26 20:55:00.310642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.812 [2024-11-26 20:55:00.310670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.812 [2024-11-26 20:55:00.310701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.812 [2024-11-26 20:55:00.323995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.812 [2024-11-26 20:55:00.324026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.812 [2024-11-26 20:55:00.324043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.812 [2024-11-26 20:55:00.337087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.812 [2024-11-26 20:55:00.337118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.812 [2024-11-26 20:55:00.337136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.812 [2024-11-26 20:55:00.351022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.812 [2024-11-26 20:55:00.351052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.812 [2024-11-26 20:55:00.351070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.812 [2024-11-26 20:55:00.361718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.812 [2024-11-26 20:55:00.361764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.812 [2024-11-26 20:55:00.361781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.812 [2024-11-26 20:55:00.377059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.812 [2024-11-26 20:55:00.377087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.812 [2024-11-26 20:55:00.377118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.812 [2024-11-26 20:55:00.389871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.812 [2024-11-26 20:55:00.389898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.812 [2024-11-26 20:55:00.389934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.812 [2024-11-26 20:55:00.403467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.812 [2024-11-26 20:55:00.403494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.812 [2024-11-26 20:55:00.403527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.812 [2024-11-26 20:55:00.418530] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.812 [2024-11-26 20:55:00.418559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.812 [2024-11-26 20:55:00.418591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.812 [2024-11-26 20:55:00.433945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.812 [2024-11-26 20:55:00.433972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.812 [2024-11-26 20:55:00.434002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.812 [2024-11-26 20:55:00.446929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.812 [2024-11-26 20:55:00.446960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.812 [2024-11-26 20:55:00.446977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.812 [2024-11-26 20:55:00.459136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.812 [2024-11-26 20:55:00.459168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:25571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.812 [2024-11-26 20:55:00.459186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.812 [2024-11-26 20:55:00.474022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.812 [2024-11-26 20:55:00.474050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.812 [2024-11-26 20:55:00.474066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.812 [2024-11-26 20:55:00.487738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.812 [2024-11-26 20:55:00.487766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.812 [2024-11-26 20:55:00.487797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:56.812 [2024-11-26 20:55:00.500441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:56.812 [2024-11-26 20:55:00.500471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.812 [2024-11-26 20:55:00.500504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:24:57.070 [2024-11-26 20:55:00.511458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.070 [2024-11-26 20:55:00.511493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.070 [2024-11-26 20:55:00.511510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.070 [2024-11-26 20:55:00.524787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.070 [2024-11-26 20:55:00.524816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.070 [2024-11-26 20:55:00.524849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.070 [2024-11-26 20:55:00.537699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.070 [2024-11-26 20:55:00.537729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.070 [2024-11-26 20:55:00.537746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.070 [2024-11-26 20:55:00.551547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.070 [2024-11-26 20:55:00.551578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.070 [2024-11-26 20:55:00.551595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.070 18405.00 IOPS, 71.89 MiB/s [2024-11-26T19:55:00.767Z] [2024-11-26 20:55:00.563962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.070 [2024-11-26 20:55:00.563993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:7615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.070 [2024-11-26 20:55:00.564010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.070 [2024-11-26 20:55:00.575486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.070 [2024-11-26 20:55:00.575517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:5937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.070 [2024-11-26 20:55:00.575535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.070 [2024-11-26 20:55:00.588108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.070 [2024-11-26 20:55:00.588138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.070 [2024-11-26 20:55:00.588156] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.070 [2024-11-26 20:55:00.603069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.070 [2024-11-26 20:55:00.603096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.070 [2024-11-26 20:55:00.603126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.070 [2024-11-26 20:55:00.616794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.070 [2024-11-26 20:55:00.616829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.070 [2024-11-26 20:55:00.616852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.070 [2024-11-26 20:55:00.627539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.070 [2024-11-26 20:55:00.627567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.070 [2024-11-26 20:55:00.627583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.070 [2024-11-26 20:55:00.642506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.070 [2024-11-26 20:55:00.642548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.070 [2024-11-26 20:55:00.642565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.070 [2024-11-26 20:55:00.659486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.070 [2024-11-26 20:55:00.659517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.070 [2024-11-26 20:55:00.659534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.070 [2024-11-26 20:55:00.672790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.070 [2024-11-26 20:55:00.672820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.070 [2024-11-26 20:55:00.672837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.070 [2024-11-26 20:55:00.686174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.070 [2024-11-26 20:55:00.686206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:13336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:57.071 [2024-11-26 20:55:00.686223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.071 [2024-11-26 20:55:00.697434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.071 [2024-11-26 20:55:00.697465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.071 [2024-11-26 20:55:00.697481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.071 [2024-11-26 20:55:00.711847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.071 [2024-11-26 20:55:00.711877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.071 [2024-11-26 20:55:00.711893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.071 [2024-11-26 20:55:00.728111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.071 [2024-11-26 20:55:00.728142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.071 [2024-11-26 20:55:00.728159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.071 [2024-11-26 20:55:00.743687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.071 [2024-11-26 20:55:00.743721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.071 [2024-11-26 20:55:00.743738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.071 [2024-11-26 20:55:00.756810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.071 [2024-11-26 20:55:00.756840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.071 [2024-11-26 20:55:00.756857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.329 [2024-11-26 20:55:00.771673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.330 [2024-11-26 20:55:00.771702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.330 [2024-11-26 20:55:00.771718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.330 [2024-11-26 20:55:00.785479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.330 [2024-11-26 20:55:00.785509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 
lba:1761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.330 [2024-11-26 20:55:00.785525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.330 [2024-11-26 20:55:00.800300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.330 [2024-11-26 20:55:00.800338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.330 [2024-11-26 20:55:00.800355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.330 [2024-11-26 20:55:00.818093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.330 [2024-11-26 20:55:00.818124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.330 [2024-11-26 20:55:00.818141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.330 [2024-11-26 20:55:00.831510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.330 [2024-11-26 20:55:00.831542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.330 [2024-11-26 20:55:00.831560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.330 [2024-11-26 20:55:00.845596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.330 [2024-11-26 20:55:00.845627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.330 [2024-11-26 20:55:00.845645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.330 [2024-11-26 20:55:00.858268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.330 [2024-11-26 20:55:00.858300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.330 [2024-11-26 20:55:00.858327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.330 [2024-11-26 20:55:00.869669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.330 [2024-11-26 20:55:00.869700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.330 [2024-11-26 20:55:00.869717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.330 [2024-11-26 20:55:00.883713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.330 [2024-11-26 20:55:00.883745] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.330 [2024-11-26 20:55:00.883763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.330 [2024-11-26 20:55:00.895278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.330 [2024-11-26 20:55:00.895329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.330 [2024-11-26 20:55:00.895346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.330 [2024-11-26 20:55:00.909065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.330 [2024-11-26 20:55:00.909112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.330 [2024-11-26 20:55:00.909129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.330 [2024-11-26 20:55:00.922530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.330 [2024-11-26 20:55:00.922570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.330 [2024-11-26 20:55:00.922587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.330 [2024-11-26 20:55:00.935233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.330 [2024-11-26 20:55:00.935266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.330 [2024-11-26 20:55:00.935283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.330 [2024-11-26 20:55:00.947066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.330 [2024-11-26 20:55:00.947095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.330 [2024-11-26 20:55:00.947111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.330 [2024-11-26 20:55:00.962600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.330 [2024-11-26 20:55:00.962631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.330 [2024-11-26 20:55:00.962664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.330 [2024-11-26 20:55:00.976525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 
00:24:57.330 [2024-11-26 20:55:00.976556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.330 [2024-11-26 20:55:00.976579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.330 [2024-11-26 20:55:00.991658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.330 [2024-11-26 20:55:00.991688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.330 [2024-11-26 20:55:00.991704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.330 [2024-11-26 20:55:01.004740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.330 [2024-11-26 20:55:01.004769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.330 [2024-11-26 20:55:01.004800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.330 [2024-11-26 20:55:01.021217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.330 [2024-11-26 20:55:01.021248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.330 [2024-11-26 20:55:01.021264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.589 [2024-11-26 20:55:01.034265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.589 [2024-11-26 20:55:01.034295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.589 [2024-11-26 20:55:01.034333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.589 [2024-11-26 20:55:01.047099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.589 [2024-11-26 20:55:01.047129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.589 [2024-11-26 20:55:01.047146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.589 [2024-11-26 20:55:01.063485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.589 [2024-11-26 20:55:01.063515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.589 [2024-11-26 20:55:01.063531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.589 [2024-11-26 20:55:01.078115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.589 [2024-11-26 20:55:01.078147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.589 [2024-11-26 20:55:01.078164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.589 [2024-11-26 20:55:01.092600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.589 [2024-11-26 20:55:01.092633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.589 [2024-11-26 20:55:01.092650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.589 [2024-11-26 20:55:01.104578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.589 [2024-11-26 20:55:01.104632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.589 [2024-11-26 20:55:01.104650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.589 [2024-11-26 20:55:01.118737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.589 [2024-11-26 20:55:01.118769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.589 [2024-11-26 20:55:01.118787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.589 [2024-11-26 20:55:01.134635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.589 [2024-11-26 20:55:01.134679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.589 [2024-11-26 20:55:01.134695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.589 [2024-11-26 20:55:01.150850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.589 [2024-11-26 20:55:01.150879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.589 [2024-11-26 20:55:01.150910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.590 [2024-11-26 20:55:01.165669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.590 [2024-11-26 20:55:01.165698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.590 [2024-11-26 20:55:01.165730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.590 [2024-11-26 20:55:01.178367] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.590 [2024-11-26 20:55:01.178397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.590 [2024-11-26 20:55:01.178414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.590 [2024-11-26 20:55:01.192864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.590 [2024-11-26 20:55:01.192892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.590 [2024-11-26 20:55:01.192922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.590 [2024-11-26 20:55:01.207642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.590 [2024-11-26 20:55:01.207672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:6689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.590 [2024-11-26 20:55:01.207690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.590 [2024-11-26 20:55:01.218530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.590 [2024-11-26 20:55:01.218568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.590 [2024-11-26 20:55:01.218583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.590 [2024-11-26 20:55:01.234774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.590 [2024-11-26 20:55:01.234803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.590 [2024-11-26 20:55:01.234834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.590 [2024-11-26 20:55:01.250897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.590 [2024-11-26 20:55:01.250925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.590 [2024-11-26 20:55:01.250956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.590 [2024-11-26 20:55:01.265762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.590 [2024-11-26 20:55:01.265790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.590 [2024-11-26 20:55:01.265820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:24:57.590 [2024-11-26 20:55:01.280865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.590 [2024-11-26 20:55:01.280897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.590 [2024-11-26 20:55:01.280915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.849 [2024-11-26 20:55:01.296060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.849 [2024-11-26 20:55:01.296089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.849 [2024-11-26 20:55:01.296120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.849 [2024-11-26 20:55:01.309192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.849 [2024-11-26 20:55:01.309220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.849 [2024-11-26 20:55:01.309250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.849 [2024-11-26 20:55:01.323990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.849 [2024-11-26 20:55:01.324020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.849 [2024-11-26 20:55:01.324037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.849 [2024-11-26 20:55:01.340750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.849 [2024-11-26 20:55:01.340778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.849 [2024-11-26 20:55:01.340809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.849 [2024-11-26 20:55:01.355556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.849 [2024-11-26 20:55:01.355586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.849 [2024-11-26 20:55:01.355612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.849 [2024-11-26 20:55:01.366733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.849 [2024-11-26 20:55:01.366762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.850 [2024-11-26 20:55:01.366794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.850 [2024-11-26 20:55:01.380647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.850 [2024-11-26 20:55:01.380676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.850 [2024-11-26 20:55:01.380708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.850 [2024-11-26 20:55:01.393386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.850 [2024-11-26 20:55:01.393416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.850 [2024-11-26 20:55:01.393433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.850 [2024-11-26 20:55:01.409512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.850 [2024-11-26 20:55:01.409558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.850 [2024-11-26 20:55:01.409576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.850 [2024-11-26 20:55:01.423772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.850 [2024-11-26 20:55:01.423803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.850 [2024-11-26 20:55:01.423820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.850 [2024-11-26 20:55:01.434658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.850 [2024-11-26 20:55:01.434687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.850 [2024-11-26 20:55:01.434718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.850 [2024-11-26 20:55:01.448557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.850 [2024-11-26 20:55:01.448587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.850 [2024-11-26 20:55:01.448604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.850 [2024-11-26 20:55:01.464544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.850 [2024-11-26 20:55:01.464573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.850 [2024-11-26 20:55:01.464589] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.850 [2024-11-26 20:55:01.475618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.850 [2024-11-26 20:55:01.475646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.850 [2024-11-26 20:55:01.475676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.850 [2024-11-26 20:55:01.491909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.850 [2024-11-26 20:55:01.491939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.850 [2024-11-26 20:55:01.491972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.850 [2024-11-26 20:55:01.505324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.850 [2024-11-26 20:55:01.505355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.850 [2024-11-26 20:55:01.505372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.850 [2024-11-26 20:55:01.518499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.850 [2024-11-26 20:55:01.518529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.850 [2024-11-26 20:55:01.518561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:57.850 [2024-11-26 20:55:01.529739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:57.850 [2024-11-26 20:55:01.529767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.850 [2024-11-26 20:55:01.529798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.109 [2024-11-26 20:55:01.545164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:58.109 [2024-11-26 20:55:01.545195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.109 [2024-11-26 20:55:01.545212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.109 18325.00 IOPS, 71.58 MiB/s [2024-11-26T19:55:01.806Z] [2024-11-26 20:55:01.560335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2501880) 00:24:58.109 [2024-11-26 20:55:01.560366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 
lba:22155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.109 [2024-11-26 20:55:01.560399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:58.109 00:24:58.109 Latency(us) 00:24:58.109 [2024-11-26T19:55:01.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:58.109 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:58.109 nvme0n1 : 2.05 17973.68 70.21 0.00 0.00 6973.51 3495.25 48933.55 00:24:58.109 [2024-11-26T19:55:01.806Z] =================================================================================================================== 00:24:58.109 [2024-11-26T19:55:01.807Z] Total : 17973.68 70.21 0.00 0.00 6973.51 3495.25 48933.55 00:24:58.110 { 00:24:58.110 "results": [ 00:24:58.110 { 00:24:58.110 "job": "nvme0n1", 00:24:58.110 "core_mask": "0x2", 00:24:58.110 "workload": "randread", 00:24:58.110 "status": "finished", 00:24:58.110 "queue_depth": 128, 00:24:58.110 "io_size": 4096, 00:24:58.110 "runtime": 2.046214, 00:24:58.110 "iops": 17973.682127089345, 00:24:58.110 "mibps": 70.20969580894275, 00:24:58.110 "io_failed": 0, 00:24:58.110 "io_timeout": 0, 00:24:58.110 "avg_latency_us": 6973.505028247562, 00:24:58.110 "min_latency_us": 3495.2533333333336, 00:24:58.110 "max_latency_us": 48933.54666666667 00:24:58.110 } 00:24:58.110 ], 00:24:58.110 "core_count": 1 00:24:58.110 } 00:24:58.110 20:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:58.110 20:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:58.110 20:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:58.110 | .driver_specific 00:24:58.110 | .nvme_error 00:24:58.110 | .status_code 00:24:58.110 | .command_transient_transport_error' 00:24:58.110 20:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:58.368 20:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 144 > 0 )) 00:24:58.368 20:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1764287 00:24:58.368 20:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1764287 ']' 00:24:58.368 20:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1764287 00:24:58.368 20:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:24:58.368 20:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:58.368 20:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1764287 00:24:58.368 20:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:58.368 20:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:58.368 20:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1764287' 00:24:58.368 killing process with pid 1764287 00:24:58.368 20:55:01 
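For reference, the transient-error readback traced just above can be reproduced standalone roughly as follows. This is only a sketch: it reuses the rpc.py path, the bperf RPC socket and the jq filter exactly as they appear in the trace, and it assumes NVMe error statistics were enabled on the controller (bdev_nvme_set_options --nvme-error-stat, as done for these runs).

    # Query the per-bdev NVMe error counters kept by the bperf app and pull out the
    # number of commands that completed with COMMAND TRANSIENT TRANSPORT ERROR status.
    errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # The digest-error test only passes if at least one such completion was observed;
    # the (( 144 > 0 )) check that follows in the trace is this comparison with the
    # count returned for this run.
    (( errcount > 0 ))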
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1764287 00:24:58.368 Received shutdown signal, test time was about 2.000000 seconds 00:24:58.368 00:24:58.368 Latency(us) 00:24:58.368 [2024-11-26T19:55:02.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:58.368 [2024-11-26T19:55:02.065Z] =================================================================================================================== 00:24:58.368 [2024-11-26T19:55:02.065Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:58.368 20:55:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1764287 00:24:58.626 20:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:24:58.626 20:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:58.626 20:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:58.626 20:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:24:58.626 20:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:24:58.626 20:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1764704 00:24:58.626 20:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:24:58.626 20:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1764704 /var/tmp/bperf.sock 00:24:58.626 20:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1764704 ']' 00:24:58.626 20:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:58.626 20:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:58.626 20:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:58.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:58.626 20:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:58.626 20:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:58.626 [2024-11-26 20:55:02.202789] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:24:58.626 [2024-11-26 20:55:02.202874] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1764704 ] 00:24:58.626 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:58.626 Zero copy mechanism will not be used. 
00:24:58.626 [2024-11-26 20:55:02.272248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.885 [2024-11-26 20:55:02.330905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.885 20:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:58.885 20:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:24:58.885 20:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:58.885 20:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:59.143 20:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:59.143 20:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.143 20:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:59.143 20:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.143 20:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:59.143 20:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:59.400 nvme0n1 00:24:59.400 20:55:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:59.400 20:55:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.400 20:55:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:59.400 20:55:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.400 20:55:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:59.400 20:55:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:59.659 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:59.659 Zero copy mechanism will not be used. 00:24:59.659 Running I/O for 2 seconds... 
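Condensed, the setup for this second error run (128 KiB random reads at queue depth 16) is the RPC sequence visible in the trace above. The sketch below reuses the same binaries, socket and arguments as the trace; the backgrounding and wait-for-listen handling is simplified, and pointing the accel_error_inject_error calls at the default RPC socket is an assumption about where rpc_cmd is directed here.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Start bdevperf in wait mode (-z): 128 KiB random reads, queue depth 16, 2 s run,
    # with its RPC server listening on the bperf socket.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &

    # Keep per-status-code NVMe error counters and retry failed I/O indefinitely.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Clear any previous crc32c error injection before attaching the controller.
    $rpc accel_error_inject_error -o crc32c -t disable
    # Attach the TCP controller with data digest enabled (--ddgst) so payloads are checksummed.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Periodically corrupt crc32c results (interval 32) so data digest mismatches surface
    # as COMMAND TRANSIENT TRANSPORT ERROR completions, then kick off the workload.
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests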
00:24:59.659 [2024-11-26 20:55:03.160668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.659 [2024-11-26 20:55:03.160722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-11-26 20:55:03.160752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:59.659 [2024-11-26 20:55:03.166845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.659 [2024-11-26 20:55:03.166880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-11-26 20:55:03.166898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:59.659 [2024-11-26 20:55:03.172914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.659 [2024-11-26 20:55:03.172947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-11-26 20:55:03.172965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:59.659 [2024-11-26 20:55:03.176906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.659 [2024-11-26 20:55:03.176938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-11-26 20:55:03.176956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:59.659 [2024-11-26 20:55:03.181027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.659 [2024-11-26 20:55:03.181060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-11-26 20:55:03.181078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:59.659 [2024-11-26 20:55:03.186240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.659 [2024-11-26 20:55:03.186272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-11-26 20:55:03.186291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:59.659 [2024-11-26 20:55:03.192002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.659 [2024-11-26 20:55:03.192034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.659 [2024-11-26 20:55:03.192052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:59.659 [2024-11-26 20:55:03.197993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.659 [2024-11-26 20:55:03.198025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.660 [2024-11-26 20:55:03.198045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:59.660 [2024-11-26 20:55:03.203842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.660 [2024-11-26 20:55:03.203875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.660 [2024-11-26 20:55:03.203893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:59.660 [2024-11-26 20:55:03.209278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.660 [2024-11-26 20:55:03.209327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.660 [2024-11-26 20:55:03.209347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:59.660 [2024-11-26 20:55:03.215346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.660 [2024-11-26 20:55:03.215378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.660 [2024-11-26 20:55:03.215396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:59.660 [2024-11-26 20:55:03.221463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.660 [2024-11-26 20:55:03.221496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.660 [2024-11-26 20:55:03.221515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:59.660 [2024-11-26 20:55:03.227428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.660 [2024-11-26 20:55:03.227461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.660 [2024-11-26 20:55:03.227494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:59.660 [2024-11-26 20:55:03.233017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.660 [2024-11-26 20:55:03.233048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.660 [2024-11-26 20:55:03.233066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:59.660 [2024-11-26 20:55:03.238115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.660 [2024-11-26 20:55:03.238146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.660 [2024-11-26 20:55:03.238163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:59.660 [2024-11-26 20:55:03.244463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.660 [2024-11-26 20:55:03.244509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.660 [2024-11-26 20:55:03.244527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:59.660 [2024-11-26 20:55:03.249947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.660 [2024-11-26 20:55:03.249979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.660 [2024-11-26 20:55:03.249996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:59.660 [2024-11-26 20:55:03.255081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.660 [2024-11-26 20:55:03.255113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.660 [2024-11-26 20:55:03.255130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:59.660 [2024-11-26 20:55:03.260127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.660 [2024-11-26 20:55:03.260160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.660 [2024-11-26 20:55:03.260179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:59.660 [2024-11-26 20:55:03.266778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.660 [2024-11-26 20:55:03.266810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.660 [2024-11-26 20:55:03.266843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:59.660 [2024-11-26 20:55:03.272727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.660 [2024-11-26 20:55:03.272774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.660 [2024-11-26 20:55:03.272793] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:59.660 [2024-11-26 20:55:03.278193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.660 [2024-11-26 20:55:03.278225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.660 [2024-11-26 20:55:03.278243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:59.660 [2024-11-26 20:55:03.283879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.660 [2024-11-26 20:55:03.283911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.660 [2024-11-26 20:55:03.283929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:59.660 [2024-11-26 20:55:03.289360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.660 [2024-11-26 20:55:03.289393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.660 [2024-11-26 20:55:03.289410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:59.660 [2024-11-26 20:55:03.295433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.660 [2024-11-26 20:55:03.295466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.660 [2024-11-26 20:55:03.295485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:59.660 [2024-11-26 20:55:03.302287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.660 [2024-11-26 20:55:03.302327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.660 [2024-11-26 20:55:03.302346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:59.660 [2024-11-26 20:55:03.309960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.660 [2024-11-26 20:55:03.309993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.660 [2024-11-26 20:55:03.310027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:59.660 [2024-11-26 20:55:03.318386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.660 [2024-11-26 20:55:03.318419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.660 
[2024-11-26 20:55:03.318436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:59.660 [2024-11-26 20:55:03.325708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.660 [2024-11-26 20:55:03.325741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.660 [2024-11-26 20:55:03.325760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:59.660 [2024-11-26 20:55:03.331250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.660 [2024-11-26 20:55:03.331282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.660 [2024-11-26 20:55:03.331300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:59.660 [2024-11-26 20:55:03.335865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.660 [2024-11-26 20:55:03.335896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.660 [2024-11-26 20:55:03.335914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:59.660 [2024-11-26 20:55:03.340471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.660 [2024-11-26 20:55:03.340501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.660 [2024-11-26 20:55:03.340519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:59.660 [2024-11-26 20:55:03.345358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.660 [2024-11-26 20:55:03.345389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.660 [2024-11-26 20:55:03.345406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:59.660 [2024-11-26 20:55:03.349902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.661 [2024-11-26 20:55:03.349933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.661 [2024-11-26 20:55:03.349951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:59.919 [2024-11-26 20:55:03.354342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.919 [2024-11-26 20:55:03.354372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10656 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.919 [2024-11-26 20:55:03.354389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:59.919 [2024-11-26 20:55:03.358759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.919 [2024-11-26 20:55:03.358788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.919 [2024-11-26 20:55:03.358805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:59.919 [2024-11-26 20:55:03.363209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.919 [2024-11-26 20:55:03.363239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.919 [2024-11-26 20:55:03.363255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:59.919 [2024-11-26 20:55:03.368003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.919 [2024-11-26 20:55:03.368034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.919 [2024-11-26 20:55:03.368053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:59.919 [2024-11-26 20:55:03.372799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.919 [2024-11-26 20:55:03.372830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.919 [2024-11-26 20:55:03.372848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:59.919 [2024-11-26 20:55:03.378707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.919 [2024-11-26 20:55:03.378739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.919 [2024-11-26 20:55:03.378757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:59.919 [2024-11-26 20:55:03.384624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.919 [2024-11-26 20:55:03.384656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.919 [2024-11-26 20:55:03.384673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:59.919 [2024-11-26 20:55:03.390985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.919 [2024-11-26 20:55:03.391017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:6 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.919 [2024-11-26 20:55:03.391035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:59.919 [2024-11-26 20:55:03.397292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.919 [2024-11-26 20:55:03.397333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.919 [2024-11-26 20:55:03.397352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:59.919 [2024-11-26 20:55:03.403634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.919 [2024-11-26 20:55:03.403667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.919 [2024-11-26 20:55:03.403692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:59.920 [2024-11-26 20:55:03.409134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.920 [2024-11-26 20:55:03.409166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.920 [2024-11-26 20:55:03.409184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:59.920 [2024-11-26 20:55:03.414419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.920 [2024-11-26 20:55:03.414450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.920 [2024-11-26 20:55:03.414468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:59.920 [2024-11-26 20:55:03.420037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.920 [2024-11-26 20:55:03.420068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.920 [2024-11-26 20:55:03.420086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:59.920 [2024-11-26 20:55:03.425580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.920 [2024-11-26 20:55:03.425612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.920 [2024-11-26 20:55:03.425630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:59.920 [2024-11-26 20:55:03.428953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.920 [2024-11-26 20:55:03.428984] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.920 [2024-11-26 20:55:03.429002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:59.920 [2024-11-26 20:55:03.434754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.920 [2024-11-26 20:55:03.434786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.920 [2024-11-26 20:55:03.434804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:59.920 [2024-11-26 20:55:03.440208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.920 [2024-11-26 20:55:03.440238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.920 [2024-11-26 20:55:03.440271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:59.920 [2024-11-26 20:55:03.446217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.920 [2024-11-26 20:55:03.446263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.920 [2024-11-26 20:55:03.446282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:59.920 [2024-11-26 20:55:03.451720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.920 [2024-11-26 20:55:03.451759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.920 [2024-11-26 20:55:03.451778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:59.920 [2024-11-26 20:55:03.456970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.920 [2024-11-26 20:55:03.457001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.920 [2024-11-26 20:55:03.457019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:59.920 [2024-11-26 20:55:03.461770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.920 [2024-11-26 20:55:03.461801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.920 [2024-11-26 20:55:03.461820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:59.920 [2024-11-26 20:55:03.466520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x19d6dc0) 00:24:59.920 [2024-11-26 20:55:03.466551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.920 [2024-11-26 20:55:03.466568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:59.920 [2024-11-26 20:55:03.471709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.920 [2024-11-26 20:55:03.471742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.920 [2024-11-26 20:55:03.471760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:59.920 [2024-11-26 20:55:03.476870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.920 [2024-11-26 20:55:03.476901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.920 [2024-11-26 20:55:03.476919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:59.920 [2024-11-26 20:55:03.481618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.920 [2024-11-26 20:55:03.481649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.920 [2024-11-26 20:55:03.481667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:59.920 [2024-11-26 20:55:03.487748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.920 [2024-11-26 20:55:03.487779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.920 [2024-11-26 20:55:03.487797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:59.920 [2024-11-26 20:55:03.493881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.920 [2024-11-26 20:55:03.493914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.920 [2024-11-26 20:55:03.493932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:59.920 [2024-11-26 20:55:03.501103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.920 [2024-11-26 20:55:03.501135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.920 [2024-11-26 20:55:03.501153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:59.920 [2024-11-26 20:55:03.509356] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.920 [2024-11-26 20:55:03.509388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.920 [2024-11-26 20:55:03.509406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:59.920 [2024-11-26 20:55:03.516797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.920 [2024-11-26 20:55:03.516829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.920 [2024-11-26 20:55:03.516848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:59.920 [2024-11-26 20:55:03.524547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.920 [2024-11-26 20:55:03.524579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.920 [2024-11-26 20:55:03.524598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:59.920 [2024-11-26 20:55:03.532186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.920 [2024-11-26 20:55:03.532218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.920 [2024-11-26 20:55:03.532236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:59.920 [2024-11-26 20:55:03.539818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.920 [2024-11-26 20:55:03.539849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.920 [2024-11-26 20:55:03.539867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:59.920 [2024-11-26 20:55:03.547399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.920 [2024-11-26 20:55:03.547431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.920 [2024-11-26 20:55:03.547450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:59.920 [2024-11-26 20:55:03.552267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.920 [2024-11-26 20:55:03.552299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.920 [2024-11-26 20:55:03.552327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:24:59.920 [2024-11-26 20:55:03.558278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.920 [2024-11-26 20:55:03.558318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.920 [2024-11-26 20:55:03.558354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:59.920 [2024-11-26 20:55:03.565961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.920 [2024-11-26 20:55:03.566008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.920 [2024-11-26 20:55:03.566025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:59.920 [2024-11-26 20:55:03.573672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.921 [2024-11-26 20:55:03.573704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.921 [2024-11-26 20:55:03.573736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:59.921 [2024-11-26 20:55:03.581381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.921 [2024-11-26 20:55:03.581413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.921 [2024-11-26 20:55:03.581431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:59.921 [2024-11-26 20:55:03.589055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.921 [2024-11-26 20:55:03.589087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.921 [2024-11-26 20:55:03.589105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:59.921 [2024-11-26 20:55:03.596537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.921 [2024-11-26 20:55:03.596575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.921 [2024-11-26 20:55:03.596593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:59.921 [2024-11-26 20:55:03.604564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.921 [2024-11-26 20:55:03.604601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.921 [2024-11-26 20:55:03.604643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:59.921 [2024-11-26 20:55:03.612190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:24:59.921 [2024-11-26 20:55:03.612221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:59.921 [2024-11-26 20:55:03.612239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.179 [2024-11-26 20:55:03.618348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.179 [2024-11-26 20:55:03.618383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.179 [2024-11-26 20:55:03.618401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.179 [2024-11-26 20:55:03.623265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.179 [2024-11-26 20:55:03.623296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.179 [2024-11-26 20:55:03.623325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.179 [2024-11-26 20:55:03.628349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.179 [2024-11-26 20:55:03.628380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.179 [2024-11-26 20:55:03.628397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.179 [2024-11-26 20:55:03.634761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.179 [2024-11-26 20:55:03.634793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.179 [2024-11-26 20:55:03.634810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.179 [2024-11-26 20:55:03.640344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.179 [2024-11-26 20:55:03.640376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.179 [2024-11-26 20:55:03.640394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.179 [2024-11-26 20:55:03.644887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.179 [2024-11-26 20:55:03.644919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.180 [2024-11-26 20:55:03.644937] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.180 [2024-11-26 20:55:03.649540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.180 [2024-11-26 20:55:03.649572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.180 [2024-11-26 20:55:03.649589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.180 [2024-11-26 20:55:03.653141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.180 [2024-11-26 20:55:03.653170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.180 [2024-11-26 20:55:03.653188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.180 [2024-11-26 20:55:03.658760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.180 [2024-11-26 20:55:03.658791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.180 [2024-11-26 20:55:03.658808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.180 [2024-11-26 20:55:03.665824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.180 [2024-11-26 20:55:03.665853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.180 [2024-11-26 20:55:03.665877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.180 [2024-11-26 20:55:03.671003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.180 [2024-11-26 20:55:03.671035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.180 [2024-11-26 20:55:03.671052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.180 [2024-11-26 20:55:03.675871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.180 [2024-11-26 20:55:03.675902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.180 [2024-11-26 20:55:03.675920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.180 [2024-11-26 20:55:03.681143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.180 [2024-11-26 20:55:03.681174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:00.180 [2024-11-26 20:55:03.681192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.180 [2024-11-26 20:55:03.686927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.180 [2024-11-26 20:55:03.686959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.180 [2024-11-26 20:55:03.686977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.180 [2024-11-26 20:55:03.694449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.180 [2024-11-26 20:55:03.694482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.180 [2024-11-26 20:55:03.694500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.180 [2024-11-26 20:55:03.700766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.180 [2024-11-26 20:55:03.700797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.180 [2024-11-26 20:55:03.700814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.180 [2024-11-26 20:55:03.707796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.180 [2024-11-26 20:55:03.707842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.180 [2024-11-26 20:55:03.707859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.180 [2024-11-26 20:55:03.714129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.180 [2024-11-26 20:55:03.714175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.180 [2024-11-26 20:55:03.714193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.180 [2024-11-26 20:55:03.721131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.180 [2024-11-26 20:55:03.721184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.180 [2024-11-26 20:55:03.721202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.180 [2024-11-26 20:55:03.727805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.180 [2024-11-26 20:55:03.727836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.180 [2024-11-26 20:55:03.727855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.180 [2024-11-26 20:55:03.734255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.180 [2024-11-26 20:55:03.734287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.180 [2024-11-26 20:55:03.734334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.180 [2024-11-26 20:55:03.740839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.180 [2024-11-26 20:55:03.740871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.180 [2024-11-26 20:55:03.740889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.180 [2024-11-26 20:55:03.746953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.180 [2024-11-26 20:55:03.746985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.180 [2024-11-26 20:55:03.747003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.180 [2024-11-26 20:55:03.753269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.180 [2024-11-26 20:55:03.753300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.180 [2024-11-26 20:55:03.753328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.180 [2024-11-26 20:55:03.759606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.180 [2024-11-26 20:55:03.759637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.180 [2024-11-26 20:55:03.759655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.180 [2024-11-26 20:55:03.765957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.180 [2024-11-26 20:55:03.765989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.180 [2024-11-26 20:55:03.766006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.180 [2024-11-26 20:55:03.772227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.181 [2024-11-26 20:55:03.772258] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.181 [2024-11-26 20:55:03.772276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.181 [2024-11-26 20:55:03.778638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.181 [2024-11-26 20:55:03.778669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.181 [2024-11-26 20:55:03.778687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.181 [2024-11-26 20:55:03.782085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.181 [2024-11-26 20:55:03.782115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.181 [2024-11-26 20:55:03.782132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.181 [2024-11-26 20:55:03.787436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.181 [2024-11-26 20:55:03.787468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.181 [2024-11-26 20:55:03.787485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.181 [2024-11-26 20:55:03.794595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.181 [2024-11-26 20:55:03.794627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.181 [2024-11-26 20:55:03.794644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.181 [2024-11-26 20:55:03.801009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.181 [2024-11-26 20:55:03.801041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.181 [2024-11-26 20:55:03.801058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.181 [2024-11-26 20:55:03.808165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.181 [2024-11-26 20:55:03.808196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.181 [2024-11-26 20:55:03.808230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.181 [2024-11-26 20:55:03.815111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 
00:25:00.181 [2024-11-26 20:55:03.815142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.181 [2024-11-26 20:55:03.815173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.181 [2024-11-26 20:55:03.823227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.181 [2024-11-26 20:55:03.823273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.181 [2024-11-26 20:55:03.823291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.181 [2024-11-26 20:55:03.829445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.181 [2024-11-26 20:55:03.829477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.181 [2024-11-26 20:55:03.829501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.181 [2024-11-26 20:55:03.834510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.181 [2024-11-26 20:55:03.834541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.181 [2024-11-26 20:55:03.834559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.181 [2024-11-26 20:55:03.840100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.181 [2024-11-26 20:55:03.840131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.181 [2024-11-26 20:55:03.840149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.181 [2024-11-26 20:55:03.844743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.181 [2024-11-26 20:55:03.844774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.181 [2024-11-26 20:55:03.844792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.181 [2024-11-26 20:55:03.849332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.181 [2024-11-26 20:55:03.849362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.181 [2024-11-26 20:55:03.849379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.181 [2024-11-26 20:55:03.853811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.181 [2024-11-26 20:55:03.853842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.181 [2024-11-26 20:55:03.853873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.181 [2024-11-26 20:55:03.858590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.181 [2024-11-26 20:55:03.858621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.181 [2024-11-26 20:55:03.858638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.181 [2024-11-26 20:55:03.863164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.181 [2024-11-26 20:55:03.863194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.181 [2024-11-26 20:55:03.863211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.181 [2024-11-26 20:55:03.867742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.181 [2024-11-26 20:55:03.867772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.181 [2024-11-26 20:55:03.867788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.181 [2024-11-26 20:55:03.872368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.181 [2024-11-26 20:55:03.872404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.181 [2024-11-26 20:55:03.872422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.440 [2024-11-26 20:55:03.877541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.440 [2024-11-26 20:55:03.877571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.440 [2024-11-26 20:55:03.877587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.440 [2024-11-26 20:55:03.882727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.440 [2024-11-26 20:55:03.882758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.440 [2024-11-26 20:55:03.882775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.440 [2024-11-26 20:55:03.887374] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.440 [2024-11-26 20:55:03.887404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.440 [2024-11-26 20:55:03.887422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.440 [2024-11-26 20:55:03.891394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.440 [2024-11-26 20:55:03.891425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.440 [2024-11-26 20:55:03.891443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.440 [2024-11-26 20:55:03.894875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.440 [2024-11-26 20:55:03.894904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.440 [2024-11-26 20:55:03.894938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.440 [2024-11-26 20:55:03.900285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.440 [2024-11-26 20:55:03.900343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.440 [2024-11-26 20:55:03.900372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.440 [2024-11-26 20:55:03.905442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.440 [2024-11-26 20:55:03.905473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.440 [2024-11-26 20:55:03.905491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.441 [2024-11-26 20:55:03.912081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.441 [2024-11-26 20:55:03.912111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.441 [2024-11-26 20:55:03.912143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.441 [2024-11-26 20:55:03.917585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.441 [2024-11-26 20:55:03.917616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.441 [2024-11-26 20:55:03.917648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 
00:25:00.441 [2024-11-26 20:55:03.922250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.441 [2024-11-26 20:55:03.922281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.441 [2024-11-26 20:55:03.922298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.441 [2024-11-26 20:55:03.926659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.441 [2024-11-26 20:55:03.926689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.441 [2024-11-26 20:55:03.926721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.441 [2024-11-26 20:55:03.931213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.441 [2024-11-26 20:55:03.931244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.441 [2024-11-26 20:55:03.931260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.441 [2024-11-26 20:55:03.935760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.441 [2024-11-26 20:55:03.935791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.441 [2024-11-26 20:55:03.935809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.441 [2024-11-26 20:55:03.940346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.441 [2024-11-26 20:55:03.940377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.441 [2024-11-26 20:55:03.940393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.441 [2024-11-26 20:55:03.945012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.441 [2024-11-26 20:55:03.945057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.441 [2024-11-26 20:55:03.945074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.441 [2024-11-26 20:55:03.949891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.441 [2024-11-26 20:55:03.949938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.441 [2024-11-26 20:55:03.949955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.441 [2024-11-26 20:55:03.954722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.441 [2024-11-26 20:55:03.954753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.441 [2024-11-26 20:55:03.954793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.441 [2024-11-26 20:55:03.959612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.441 [2024-11-26 20:55:03.959643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.441 [2024-11-26 20:55:03.959660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.441 [2024-11-26 20:55:03.964187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.441 [2024-11-26 20:55:03.964220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.441 [2024-11-26 20:55:03.964238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.441 [2024-11-26 20:55:03.969555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.441 [2024-11-26 20:55:03.969592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.441 [2024-11-26 20:55:03.969609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.441 [2024-11-26 20:55:03.974919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.441 [2024-11-26 20:55:03.974950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.441 [2024-11-26 20:55:03.974967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.441 [2024-11-26 20:55:03.981093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.441 [2024-11-26 20:55:03.981138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.441 [2024-11-26 20:55:03.981155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.441 [2024-11-26 20:55:03.988793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.441 [2024-11-26 20:55:03.988823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.441 [2024-11-26 20:55:03.988839] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.441 [2024-11-26 20:55:03.995352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.441 [2024-11-26 20:55:03.995386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.441 [2024-11-26 20:55:03.995418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.441 [2024-11-26 20:55:04.001989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.441 [2024-11-26 20:55:04.002034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.441 [2024-11-26 20:55:04.002051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.441 [2024-11-26 20:55:04.008927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.441 [2024-11-26 20:55:04.008958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.441 [2024-11-26 20:55:04.008976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.441 [2024-11-26 20:55:04.014895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.441 [2024-11-26 20:55:04.014926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.441 [2024-11-26 20:55:04.014944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.441 [2024-11-26 20:55:04.020502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.441 [2024-11-26 20:55:04.020534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.441 [2024-11-26 20:55:04.020552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.442 [2024-11-26 20:55:04.026742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.442 [2024-11-26 20:55:04.026786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.442 [2024-11-26 20:55:04.026802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.442 [2024-11-26 20:55:04.032994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.442 [2024-11-26 20:55:04.033026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.442 [2024-11-26 20:55:04.033058] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.442 [2024-11-26 20:55:04.038450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.442 [2024-11-26 20:55:04.038481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.442 [2024-11-26 20:55:04.038498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.442 [2024-11-26 20:55:04.043329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.442 [2024-11-26 20:55:04.043359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.442 [2024-11-26 20:55:04.043376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.442 [2024-11-26 20:55:04.048272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.442 [2024-11-26 20:55:04.048326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.442 [2024-11-26 20:55:04.048346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.442 [2024-11-26 20:55:04.053706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.442 [2024-11-26 20:55:04.053738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.442 [2024-11-26 20:55:04.053761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.442 [2024-11-26 20:55:04.058870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.442 [2024-11-26 20:55:04.058902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.442 [2024-11-26 20:55:04.058919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.442 [2024-11-26 20:55:04.063711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.442 [2024-11-26 20:55:04.063742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.442 [2024-11-26 20:55:04.063759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.442 [2024-11-26 20:55:04.069088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.442 [2024-11-26 20:55:04.069119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:00.442 [2024-11-26 20:55:04.069151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.442 [2024-11-26 20:55:04.075258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.442 [2024-11-26 20:55:04.075312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.442 [2024-11-26 20:55:04.075332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.442 [2024-11-26 20:55:04.081067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.442 [2024-11-26 20:55:04.081099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.442 [2024-11-26 20:55:04.081116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.442 [2024-11-26 20:55:04.086709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.442 [2024-11-26 20:55:04.086740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.442 [2024-11-26 20:55:04.086758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.442 [2024-11-26 20:55:04.092852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.442 [2024-11-26 20:55:04.092883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.442 [2024-11-26 20:55:04.092902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.442 [2024-11-26 20:55:04.098422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.442 [2024-11-26 20:55:04.098454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.442 [2024-11-26 20:55:04.098472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.442 [2024-11-26 20:55:04.103986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.442 [2024-11-26 20:55:04.104024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.442 [2024-11-26 20:55:04.104042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.442 [2024-11-26 20:55:04.109965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.442 [2024-11-26 20:55:04.109996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.442 [2024-11-26 20:55:04.110028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.442 [2024-11-26 20:55:04.116468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.442 [2024-11-26 20:55:04.116498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.442 [2024-11-26 20:55:04.116516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.442 [2024-11-26 20:55:04.121734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.442 [2024-11-26 20:55:04.121765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.442 [2024-11-26 20:55:04.121783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.442 [2024-11-26 20:55:04.127831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.442 [2024-11-26 20:55:04.127863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.442 [2024-11-26 20:55:04.127880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.442 [2024-11-26 20:55:04.133939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.442 [2024-11-26 20:55:04.133969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.442 [2024-11-26 20:55:04.133987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.701 [2024-11-26 20:55:04.140582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.701 [2024-11-26 20:55:04.140613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.701 [2024-11-26 20:55:04.140631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.701 [2024-11-26 20:55:04.145896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.701 [2024-11-26 20:55:04.145927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.701 [2024-11-26 20:55:04.145944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.701 [2024-11-26 20:55:04.150858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.701 [2024-11-26 20:55:04.150890] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.701 [2024-11-26 20:55:04.150907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.701 [2024-11-26 20:55:04.155900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.701 [2024-11-26 20:55:04.155930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.701 [2024-11-26 20:55:04.155949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.701 5389.00 IOPS, 673.62 MiB/s [2024-11-26T19:55:04.398Z] [2024-11-26 20:55:04.161767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.701 [2024-11-26 20:55:04.161799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.701 [2024-11-26 20:55:04.161816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.701 [2024-11-26 20:55:04.166286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.701 [2024-11-26 20:55:04.166324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.701 [2024-11-26 20:55:04.166343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.701 [2024-11-26 20:55:04.170980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.701 [2024-11-26 20:55:04.171011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.701 [2024-11-26 20:55:04.171028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.701 [2024-11-26 20:55:04.175407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.701 [2024-11-26 20:55:04.175437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.701 [2024-11-26 20:55:04.175454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.701 [2024-11-26 20:55:04.179782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.701 [2024-11-26 20:55:04.179812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.701 [2024-11-26 20:55:04.179829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.701 [2024-11-26 20:55:04.184291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.701 [2024-11-26 20:55:04.184328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.701 [2024-11-26 20:55:04.184345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.701 [2024-11-26 20:55:04.188789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.701 [2024-11-26 20:55:04.188819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.701 [2024-11-26 20:55:04.188836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.701 [2024-11-26 20:55:04.193472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.701 [2024-11-26 20:55:04.193501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.701 [2024-11-26 20:55:04.193525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.701 [2024-11-26 20:55:04.198389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.701 [2024-11-26 20:55:04.198420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.701 [2024-11-26 20:55:04.198437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.701 [2024-11-26 20:55:04.203191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.701 [2024-11-26 20:55:04.203221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.701 [2024-11-26 20:55:04.203237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.701 [2024-11-26 20:55:04.207934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.701 [2024-11-26 20:55:04.207964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.701 [2024-11-26 20:55:04.207980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.701 [2024-11-26 20:55:04.212638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.701 [2024-11-26 20:55:04.212668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.701 [2024-11-26 20:55:04.212686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.701 [2024-11-26 20:55:04.217765] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.701 [2024-11-26 20:55:04.217796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.701 [2024-11-26 20:55:04.217814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.701 [2024-11-26 20:55:04.222987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.701 [2024-11-26 20:55:04.223018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.702 [2024-11-26 20:55:04.223035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.702 [2024-11-26 20:55:04.228997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.702 [2024-11-26 20:55:04.229028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.702 [2024-11-26 20:55:04.229045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.702 [2024-11-26 20:55:04.236729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.702 [2024-11-26 20:55:04.236761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.702 [2024-11-26 20:55:04.236779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.702 [2024-11-26 20:55:04.242923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.702 [2024-11-26 20:55:04.242954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.702 [2024-11-26 20:55:04.242972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.702 [2024-11-26 20:55:04.246582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.702 [2024-11-26 20:55:04.246613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.702 [2024-11-26 20:55:04.246631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.702 [2024-11-26 20:55:04.253105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.702 [2024-11-26 20:55:04.253135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.702 [2024-11-26 20:55:04.253166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:25:00.702 [2024-11-26 20:55:04.259048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.702 [2024-11-26 20:55:04.259093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.702 [2024-11-26 20:55:04.259111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.702 [2024-11-26 20:55:04.264695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.702 [2024-11-26 20:55:04.264727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.702 [2024-11-26 20:55:04.264760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.702 [2024-11-26 20:55:04.269898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.702 [2024-11-26 20:55:04.269928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.702 [2024-11-26 20:55:04.269946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.702 [2024-11-26 20:55:04.276418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.702 [2024-11-26 20:55:04.276450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.702 [2024-11-26 20:55:04.276468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.702 [2024-11-26 20:55:04.281335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.702 [2024-11-26 20:55:04.281380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.702 [2024-11-26 20:55:04.281398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.702 [2024-11-26 20:55:04.287324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.702 [2024-11-26 20:55:04.287355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.702 [2024-11-26 20:55:04.287378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.702 [2024-11-26 20:55:04.292859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.702 [2024-11-26 20:55:04.292891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.702 [2024-11-26 20:55:04.292909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.702 [2024-11-26 20:55:04.298528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.702 [2024-11-26 20:55:04.298559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.702 [2024-11-26 20:55:04.298591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.702 [2024-11-26 20:55:04.304428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.702 [2024-11-26 20:55:04.304460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.702 [2024-11-26 20:55:04.304479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.702 [2024-11-26 20:55:04.310074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.702 [2024-11-26 20:55:04.310104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.702 [2024-11-26 20:55:04.310121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.702 [2024-11-26 20:55:04.316065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.702 [2024-11-26 20:55:04.316097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.702 [2024-11-26 20:55:04.316114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.702 [2024-11-26 20:55:04.322893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.702 [2024-11-26 20:55:04.322939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.702 [2024-11-26 20:55:04.322958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.702 [2024-11-26 20:55:04.330058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.702 [2024-11-26 20:55:04.330089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.702 [2024-11-26 20:55:04.330107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.702 [2024-11-26 20:55:04.336962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.702 [2024-11-26 20:55:04.336994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.702 [2024-11-26 20:55:04.337012] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.702 [2024-11-26 20:55:04.342977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.702 [2024-11-26 20:55:04.343013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.702 [2024-11-26 20:55:04.343046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.702 [2024-11-26 20:55:04.349461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.702 [2024-11-26 20:55:04.349505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.702 [2024-11-26 20:55:04.349522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.702 [2024-11-26 20:55:04.353921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.702 [2024-11-26 20:55:04.353966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.702 [2024-11-26 20:55:04.353984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.702 [2024-11-26 20:55:04.358552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.702 [2024-11-26 20:55:04.358582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.702 [2024-11-26 20:55:04.358600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.702 [2024-11-26 20:55:04.363177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.702 [2024-11-26 20:55:04.363207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.702 [2024-11-26 20:55:04.363224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.702 [2024-11-26 20:55:04.367983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.702 [2024-11-26 20:55:04.368013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.702 [2024-11-26 20:55:04.368030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.702 [2024-11-26 20:55:04.373350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.702 [2024-11-26 20:55:04.373380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.702 [2024-11-26 20:55:04.373398] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.702 [2024-11-26 20:55:04.378523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.702 [2024-11-26 20:55:04.378555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.702 [2024-11-26 20:55:04.378572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.703 [2024-11-26 20:55:04.384576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.703 [2024-11-26 20:55:04.384607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.703 [2024-11-26 20:55:04.384624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.703 [2024-11-26 20:55:04.390809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.703 [2024-11-26 20:55:04.390840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.703 [2024-11-26 20:55:04.390857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.961 [2024-11-26 20:55:04.397922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.961 [2024-11-26 20:55:04.397954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.961 [2024-11-26 20:55:04.397972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.961 [2024-11-26 20:55:04.405286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.961 [2024-11-26 20:55:04.405326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.961 [2024-11-26 20:55:04.405345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.961 [2024-11-26 20:55:04.412440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.961 [2024-11-26 20:55:04.412473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.961 [2024-11-26 20:55:04.412490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.961 [2024-11-26 20:55:04.419762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.961 [2024-11-26 20:55:04.419794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:00.961 [2024-11-26 20:55:04.419811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.961 [2024-11-26 20:55:04.426894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.961 [2024-11-26 20:55:04.426926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.961 [2024-11-26 20:55:04.426944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.961 [2024-11-26 20:55:04.430624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.961 [2024-11-26 20:55:04.430656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.961 [2024-11-26 20:55:04.430673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.961 [2024-11-26 20:55:04.437575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.961 [2024-11-26 20:55:04.437607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.961 [2024-11-26 20:55:04.437625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.962 [2024-11-26 20:55:04.444672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.962 [2024-11-26 20:55:04.444705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.962 [2024-11-26 20:55:04.444729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.962 [2024-11-26 20:55:04.450768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.962 [2024-11-26 20:55:04.450800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.962 [2024-11-26 20:55:04.450832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.962 [2024-11-26 20:55:04.456739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.962 [2024-11-26 20:55:04.456779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.962 [2024-11-26 20:55:04.456796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.962 [2024-11-26 20:55:04.462780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.962 [2024-11-26 20:55:04.462811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 
nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.962 [2024-11-26 20:55:04.462829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.962 [2024-11-26 20:55:04.468764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.962 [2024-11-26 20:55:04.468811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.962 [2024-11-26 20:55:04.468829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.962 [2024-11-26 20:55:04.473580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.962 [2024-11-26 20:55:04.473611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.962 [2024-11-26 20:55:04.473628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.962 [2024-11-26 20:55:04.478768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.962 [2024-11-26 20:55:04.478800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.962 [2024-11-26 20:55:04.478818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.962 [2024-11-26 20:55:04.483883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.962 [2024-11-26 20:55:04.483914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.962 [2024-11-26 20:55:04.483931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.962 [2024-11-26 20:55:04.488945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.962 [2024-11-26 20:55:04.488977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.962 [2024-11-26 20:55:04.488995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.962 [2024-11-26 20:55:04.493517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.962 [2024-11-26 20:55:04.493553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.962 [2024-11-26 20:55:04.493571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.962 [2024-11-26 20:55:04.498181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.962 [2024-11-26 20:55:04.498212] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.962 [2024-11-26 20:55:04.498229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.962 [2024-11-26 20:55:04.502806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.962 [2024-11-26 20:55:04.502836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.962 [2024-11-26 20:55:04.502853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.962 [2024-11-26 20:55:04.507357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.962 [2024-11-26 20:55:04.507386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.962 [2024-11-26 20:55:04.507403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.962 [2024-11-26 20:55:04.511900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.962 [2024-11-26 20:55:04.511929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.962 [2024-11-26 20:55:04.511947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.962 [2024-11-26 20:55:04.516501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.962 [2024-11-26 20:55:04.516530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.962 [2024-11-26 20:55:04.516547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.962 [2024-11-26 20:55:04.521131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.962 [2024-11-26 20:55:04.521161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.962 [2024-11-26 20:55:04.521179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.962 [2024-11-26 20:55:04.526668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.962 [2024-11-26 20:55:04.526699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.962 [2024-11-26 20:55:04.526717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.962 [2024-11-26 20:55:04.531536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.962 
[2024-11-26 20:55:04.531567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.962 [2024-11-26 20:55:04.531585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.962 [2024-11-26 20:55:04.536338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.962 [2024-11-26 20:55:04.536369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.962 [2024-11-26 20:55:04.536386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.962 [2024-11-26 20:55:04.541184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.962 [2024-11-26 20:55:04.541216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.962 [2024-11-26 20:55:04.541233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.962 [2024-11-26 20:55:04.546980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.962 [2024-11-26 20:55:04.547012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.962 [2024-11-26 20:55:04.547029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.962 [2024-11-26 20:55:04.554626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.962 [2024-11-26 20:55:04.554657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.962 [2024-11-26 20:55:04.554674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.962 [2024-11-26 20:55:04.560746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.962 [2024-11-26 20:55:04.560779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.962 [2024-11-26 20:55:04.560796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.962 [2024-11-26 20:55:04.566554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.962 [2024-11-26 20:55:04.566586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.962 [2024-11-26 20:55:04.566603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.962 [2024-11-26 20:55:04.572795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x19d6dc0) 00:25:00.962 [2024-11-26 20:55:04.572827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.962 [2024-11-26 20:55:04.572844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.962 [2024-11-26 20:55:04.579184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.962 [2024-11-26 20:55:04.579216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.962 [2024-11-26 20:55:04.579234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.962 [2024-11-26 20:55:04.585048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.963 [2024-11-26 20:55:04.585079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.963 [2024-11-26 20:55:04.585104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.963 [2024-11-26 20:55:04.590903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.963 [2024-11-26 20:55:04.590935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.963 [2024-11-26 20:55:04.590953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.963 [2024-11-26 20:55:04.597228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.963 [2024-11-26 20:55:04.597260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.963 [2024-11-26 20:55:04.597278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.963 [2024-11-26 20:55:04.603842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.963 [2024-11-26 20:55:04.603874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.963 [2024-11-26 20:55:04.603892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.963 [2024-11-26 20:55:04.609900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.963 [2024-11-26 20:55:04.609932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.963 [2024-11-26 20:55:04.609950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.963 [2024-11-26 20:55:04.615835] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.963 [2024-11-26 20:55:04.615867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.963 [2024-11-26 20:55:04.615885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.963 [2024-11-26 20:55:04.621513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.963 [2024-11-26 20:55:04.621545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.963 [2024-11-26 20:55:04.621563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.963 [2024-11-26 20:55:04.627506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.963 [2024-11-26 20:55:04.627539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.963 [2024-11-26 20:55:04.627556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:00.963 [2024-11-26 20:55:04.633792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.963 [2024-11-26 20:55:04.633824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.963 [2024-11-26 20:55:04.633841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:00.963 [2024-11-26 20:55:04.639900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.963 [2024-11-26 20:55:04.639932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.963 [2024-11-26 20:55:04.639950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:00.963 [2024-11-26 20:55:04.645878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.963 [2024-11-26 20:55:04.645910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.963 [2024-11-26 20:55:04.645928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:00.963 [2024-11-26 20:55:04.652033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:00.963 [2024-11-26 20:55:04.652065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:00.963 [2024-11-26 20:55:04.652083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:25:01.222 [2024-11-26 20:55:04.658133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.222 [2024-11-26 20:55:04.658167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.222 [2024-11-26 20:55:04.658185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:01.222 [2024-11-26 20:55:04.663950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.222 [2024-11-26 20:55:04.663981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.222 [2024-11-26 20:55:04.664000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:01.222 [2024-11-26 20:55:04.669581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.222 [2024-11-26 20:55:04.669614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.222 [2024-11-26 20:55:04.669631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:01.222 [2024-11-26 20:55:04.676396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.222 [2024-11-26 20:55:04.676430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.222 [2024-11-26 20:55:04.676449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:01.222 [2024-11-26 20:55:04.679397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.222 [2024-11-26 20:55:04.679428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.222 [2024-11-26 20:55:04.679446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:01.222 [2024-11-26 20:55:04.684316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.222 [2024-11-26 20:55:04.684358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.222 [2024-11-26 20:55:04.684386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:01.222 [2024-11-26 20:55:04.689366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.222 [2024-11-26 20:55:04.689398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.222 [2024-11-26 20:55:04.689416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:01.222 [2024-11-26 20:55:04.694644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.222 [2024-11-26 20:55:04.694676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.222 [2024-11-26 20:55:04.694693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:01.222 [2024-11-26 20:55:04.699142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.222 [2024-11-26 20:55:04.699173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.222 [2024-11-26 20:55:04.699192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:01.222 [2024-11-26 20:55:04.703415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.222 [2024-11-26 20:55:04.703446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.222 [2024-11-26 20:55:04.703464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:01.222 [2024-11-26 20:55:04.708009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.222 [2024-11-26 20:55:04.708039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.222 [2024-11-26 20:55:04.708056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:01.222 [2024-11-26 20:55:04.712591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.222 [2024-11-26 20:55:04.712631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.222 [2024-11-26 20:55:04.712647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:01.222 [2024-11-26 20:55:04.717076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.222 [2024-11-26 20:55:04.717116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.222 [2024-11-26 20:55:04.717133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:01.222 [2024-11-26 20:55:04.721700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.222 [2024-11-26 20:55:04.721729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.222 [2024-11-26 20:55:04.721761] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:01.222 [2024-11-26 20:55:04.726346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.222 [2024-11-26 20:55:04.726384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.222 [2024-11-26 20:55:04.726402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:01.223 [2024-11-26 20:55:04.730992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.223 [2024-11-26 20:55:04.731036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.223 [2024-11-26 20:55:04.731052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:01.223 [2024-11-26 20:55:04.735530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.223 [2024-11-26 20:55:04.735560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.223 [2024-11-26 20:55:04.735576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:01.223 [2024-11-26 20:55:04.740181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.223 [2024-11-26 20:55:04.740227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.223 [2024-11-26 20:55:04.740243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:01.223 [2024-11-26 20:55:04.744748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.223 [2024-11-26 20:55:04.744793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.223 [2024-11-26 20:55:04.744809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:01.223 [2024-11-26 20:55:04.749394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.223 [2024-11-26 20:55:04.749425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.223 [2024-11-26 20:55:04.749443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:01.223 [2024-11-26 20:55:04.754690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.223 [2024-11-26 20:55:04.754737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:01.223 [2024-11-26 20:55:04.754755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:01.223 [2024-11-26 20:55:04.760466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.223 [2024-11-26 20:55:04.760497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.223 [2024-11-26 20:55:04.760514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:01.223 [2024-11-26 20:55:04.766938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.223 [2024-11-26 20:55:04.766969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.223 [2024-11-26 20:55:04.766988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:01.223 [2024-11-26 20:55:04.772736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.223 [2024-11-26 20:55:04.772782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.223 [2024-11-26 20:55:04.772800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:01.223 [2024-11-26 20:55:04.777992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.223 [2024-11-26 20:55:04.778022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.223 [2024-11-26 20:55:04.778040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:01.223 [2024-11-26 20:55:04.782801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.223 [2024-11-26 20:55:04.782833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.223 [2024-11-26 20:55:04.782851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:01.223 [2024-11-26 20:55:04.787436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.223 [2024-11-26 20:55:04.787467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.223 [2024-11-26 20:55:04.787485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:01.223 [2024-11-26 20:55:04.792404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.223 [2024-11-26 20:55:04.792435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.223 [2024-11-26 20:55:04.792453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:01.223 [2024-11-26 20:55:04.798200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.223 [2024-11-26 20:55:04.798232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.223 [2024-11-26 20:55:04.798250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:01.223 [2024-11-26 20:55:04.804163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.223 [2024-11-26 20:55:04.804193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.223 [2024-11-26 20:55:04.804211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:01.223 [2024-11-26 20:55:04.810163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.223 [2024-11-26 20:55:04.810194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.223 [2024-11-26 20:55:04.810213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:01.223 [2024-11-26 20:55:04.816080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.223 [2024-11-26 20:55:04.816111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.223 [2024-11-26 20:55:04.816136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:01.223 [2024-11-26 20:55:04.821766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.223 [2024-11-26 20:55:04.821797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.223 [2024-11-26 20:55:04.821815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:01.223 [2024-11-26 20:55:04.827895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.223 [2024-11-26 20:55:04.827926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.223 [2024-11-26 20:55:04.827945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:01.223 [2024-11-26 20:55:04.833956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.223 [2024-11-26 20:55:04.833987] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.223 [2024-11-26 20:55:04.834005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:01.223 [2024-11-26 20:55:04.839478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.223 [2024-11-26 20:55:04.839510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.223 [2024-11-26 20:55:04.839527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:01.223 [2024-11-26 20:55:04.845017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.223 [2024-11-26 20:55:04.845049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.223 [2024-11-26 20:55:04.845066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:01.223 [2024-11-26 20:55:04.852108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.223 [2024-11-26 20:55:04.852140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.223 [2024-11-26 20:55:04.852172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:01.223 [2024-11-26 20:55:04.857422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.223 [2024-11-26 20:55:04.857453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.223 [2024-11-26 20:55:04.857470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:01.223 [2024-11-26 20:55:04.862225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.223 [2024-11-26 20:55:04.862256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.223 [2024-11-26 20:55:04.862273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:01.223 [2024-11-26 20:55:04.866788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.223 [2024-11-26 20:55:04.866819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.223 [2024-11-26 20:55:04.866835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:01.223 [2024-11-26 20:55:04.871456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.223 
[2024-11-26 20:55:04.871487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.223 [2024-11-26 20:55:04.871504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:01.224 [2024-11-26 20:55:04.876903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.224 [2024-11-26 20:55:04.876934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.224 [2024-11-26 20:55:04.876951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:01.224 [2024-11-26 20:55:04.881553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.224 [2024-11-26 20:55:04.881598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.224 [2024-11-26 20:55:04.881616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:01.224 [2024-11-26 20:55:04.886626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.224 [2024-11-26 20:55:04.886657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.224 [2024-11-26 20:55:04.886675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:01.224 [2024-11-26 20:55:04.890790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.224 [2024-11-26 20:55:04.890821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.224 [2024-11-26 20:55:04.890839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:01.224 [2024-11-26 20:55:04.895287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.224 [2024-11-26 20:55:04.895325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.224 [2024-11-26 20:55:04.895343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:01.224 [2024-11-26 20:55:04.899774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.224 [2024-11-26 20:55:04.899807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.224 [2024-11-26 20:55:04.899824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:01.224 [2024-11-26 20:55:04.904337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x19d6dc0) 00:25:01.224 [2024-11-26 20:55:04.904367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.224 [2024-11-26 20:55:04.904391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:01.224 [2024-11-26 20:55:04.908911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.224 [2024-11-26 20:55:04.908941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.224 [2024-11-26 20:55:04.908958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:01.224 [2024-11-26 20:55:04.913448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.224 [2024-11-26 20:55:04.913478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.224 [2024-11-26 20:55:04.913495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:01.486 [2024-11-26 20:55:04.918156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.486 [2024-11-26 20:55:04.918187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.486 [2024-11-26 20:55:04.918204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:01.486 [2024-11-26 20:55:04.922796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.486 [2024-11-26 20:55:04.922827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.486 [2024-11-26 20:55:04.922843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:01.486 [2024-11-26 20:55:04.927382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.486 [2024-11-26 20:55:04.927412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.486 [2024-11-26 20:55:04.927429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:01.486 [2024-11-26 20:55:04.931930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.486 [2024-11-26 20:55:04.931960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.486 [2024-11-26 20:55:04.931977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:01.486 [2024-11-26 20:55:04.936356] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.486 [2024-11-26 20:55:04.936385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.486 [2024-11-26 20:55:04.936402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:01.486 [2024-11-26 20:55:04.940650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.486 [2024-11-26 20:55:04.940680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.486 [2024-11-26 20:55:04.940697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:01.486 [2024-11-26 20:55:04.945128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.486 [2024-11-26 20:55:04.945164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.486 [2024-11-26 20:55:04.945181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:01.486 [2024-11-26 20:55:04.948428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.486 [2024-11-26 20:55:04.948458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.486 [2024-11-26 20:55:04.948474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:01.486 [2024-11-26 20:55:04.952029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.486 [2024-11-26 20:55:04.952059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.486 [2024-11-26 20:55:04.952076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:01.486 [2024-11-26 20:55:04.956565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.486 [2024-11-26 20:55:04.956596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.486 [2024-11-26 20:55:04.956612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:01.486 [2024-11-26 20:55:04.962245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.486 [2024-11-26 20:55:04.962277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.486 [2024-11-26 20:55:04.962295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:25:01.486 [2024-11-26 20:55:04.969611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.486 [2024-11-26 20:55:04.969645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.486 [2024-11-26 20:55:04.969663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:01.487 [2024-11-26 20:55:04.977216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.487 [2024-11-26 20:55:04.977249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.487 [2024-11-26 20:55:04.977266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:01.487 [2024-11-26 20:55:04.984801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.487 [2024-11-26 20:55:04.984834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.487 [2024-11-26 20:55:04.984852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:01.487 [2024-11-26 20:55:04.992600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.487 [2024-11-26 20:55:04.992632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.487 [2024-11-26 20:55:04.992649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:01.487 [2024-11-26 20:55:05.000200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.487 [2024-11-26 20:55:05.000232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.487 [2024-11-26 20:55:05.000250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:01.487 [2024-11-26 20:55:05.007849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.487 [2024-11-26 20:55:05.007879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.487 [2024-11-26 20:55:05.007896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:01.487 [2024-11-26 20:55:05.015619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.487 [2024-11-26 20:55:05.015651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.487 [2024-11-26 20:55:05.015687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:01.487 [2024-11-26 20:55:05.023403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.487 [2024-11-26 20:55:05.023434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.487 [2024-11-26 20:55:05.023453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:01.487 [2024-11-26 20:55:05.030933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.487 [2024-11-26 20:55:05.030966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.487 [2024-11-26 20:55:05.030984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:01.487 [2024-11-26 20:55:05.038549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.487 [2024-11-26 20:55:05.038581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.487 [2024-11-26 20:55:05.038613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:01.487 [2024-11-26 20:55:05.046096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.487 [2024-11-26 20:55:05.046129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.487 [2024-11-26 20:55:05.046147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:01.487 [2024-11-26 20:55:05.053740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.487 [2024-11-26 20:55:05.053787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.487 [2024-11-26 20:55:05.053804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:01.487 [2024-11-26 20:55:05.061400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.487 [2024-11-26 20:55:05.061432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.487 [2024-11-26 20:55:05.061457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:01.487 [2024-11-26 20:55:05.068977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.487 [2024-11-26 20:55:05.069010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.487 [2024-11-26 20:55:05.069027] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:01.487 [2024-11-26 20:55:05.076506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.487 [2024-11-26 20:55:05.076539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.487 [2024-11-26 20:55:05.076571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:01.487 [2024-11-26 20:55:05.082883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.487 [2024-11-26 20:55:05.082914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.487 [2024-11-26 20:55:05.082932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:01.487 [2024-11-26 20:55:05.088040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.487 [2024-11-26 20:55:05.088070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.487 [2024-11-26 20:55:05.088101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:01.487 [2024-11-26 20:55:05.093854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.487 [2024-11-26 20:55:05.093886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.487 [2024-11-26 20:55:05.093903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:01.487 [2024-11-26 20:55:05.100120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.487 [2024-11-26 20:55:05.100167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.487 [2024-11-26 20:55:05.100185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:01.487 [2024-11-26 20:55:05.105842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.487 [2024-11-26 20:55:05.105873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.487 [2024-11-26 20:55:05.105906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:01.487 [2024-11-26 20:55:05.112019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.487 [2024-11-26 20:55:05.112065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.487 [2024-11-26 20:55:05.112082] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:01.487 [2024-11-26 20:55:05.118645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.487 [2024-11-26 20:55:05.118683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.487 [2024-11-26 20:55:05.118702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:01.487 [2024-11-26 20:55:05.124780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.488 [2024-11-26 20:55:05.124811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.488 [2024-11-26 20:55:05.124829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:01.488 [2024-11-26 20:55:05.131398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.488 [2024-11-26 20:55:05.131429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.488 [2024-11-26 20:55:05.131461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:01.488 [2024-11-26 20:55:05.139260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.488 [2024-11-26 20:55:05.139292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.488 [2024-11-26 20:55:05.139335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:01.488 [2024-11-26 20:55:05.147412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.488 [2024-11-26 20:55:05.147444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.488 [2024-11-26 20:55:05.147461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:01.488 [2024-11-26 20:55:05.155002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.488 [2024-11-26 20:55:05.155033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.488 [2024-11-26 20:55:05.155065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:01.488 [2024-11-26 20:55:05.160982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19d6dc0) 00:25:01.488 [2024-11-26 20:55:05.161013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0
00:25:01.488 [2024-11-26 20:55:05.161045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:01.488 5441.50 IOPS, 680.19 MiB/s
00:25:01.488 Latency(us)
00:25:01.488 [2024-11-26T19:55:05.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:01.488 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:25:01.488 nvme0n1 : 2.00 5440.44 680.06 0.00 0.00 2936.85 716.04 8932.31
00:25:01.488 [2024-11-26T19:55:05.185Z] ===================================================================================================================
00:25:01.488 [2024-11-26T19:55:05.185Z] Total : 5440.44 680.06 0.00 0.00 2936.85 716.04 8932.31
00:25:01.488 {
00:25:01.488   "results": [
00:25:01.488     {
00:25:01.488       "job": "nvme0n1",
00:25:01.488       "core_mask": "0x2",
00:25:01.488       "workload": "randread",
00:25:01.488       "status": "finished",
00:25:01.488       "queue_depth": 16,
00:25:01.488       "io_size": 131072,
00:25:01.488       "runtime": 2.003513,
00:25:01.488       "iops": 5440.443860359279,
00:25:01.488       "mibps": 680.0554825449099,
00:25:01.488       "io_failed": 0,
00:25:01.488       "io_timeout": 0,
00:25:01.488       "avg_latency_us": 2936.8523199456336,
00:25:01.488       "min_latency_us": 716.0414814814815,
00:25:01.488       "max_latency_us": 8932.314074074075
00:25:01.488     }
00:25:01.488   ],
00:25:01.488   "core_count": 1
00:25:01.488 }
00:25:01.793 20:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:01.793 20:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:01.793 20:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:01.793 | .driver_specific
00:25:01.793 | .nvme_error
00:25:01.793 | .status_code
00:25:01.793 | .command_transient_transport_error'
00:25:01.793 20:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:01.793 20:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 352 > 0 ))
00:25:01.793 20:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1764704
00:25:01.793 20:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1764704 ']'
00:25:01.793 20:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1764704
00:25:01.793 20:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:25:02.052 20:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:02.052 20:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1764704
00:25:02.052 20:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:25:02.052 20:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:25:02.052 20:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1764704'
00:25:02.052 killing process with pid 1764704
00:25:02.052 20:55:05
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1764704 00:25:02.052 Received shutdown signal, test time was about 2.000000 seconds 00:25:02.052 00:25:02.052 Latency(us) 00:25:02.052 [2024-11-26T19:55:05.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.052 [2024-11-26T19:55:05.749Z] =================================================================================================================== 00:25:02.052 [2024-11-26T19:55:05.749Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:02.052 20:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1764704 00:25:02.052 20:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:25:02.052 20:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:02.052 20:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:02.052 20:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:02.052 20:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:02.052 20:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1765221 00:25:02.052 20:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:25:02.052 20:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1765221 /var/tmp/bperf.sock 00:25:02.052 20:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1765221 ']' 00:25:02.052 20:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:02.052 20:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:02.052 20:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:02.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:02.052 20:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:02.052 20:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:02.311 [2024-11-26 20:55:05.764632] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
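The per-job results block printed above at the end of the randread phase ({ "results": [ ... ] }) is plain JSON, so its headline numbers can be pulled out with a single jq filter. The snippet below is only an illustrative sketch and not part of digest.sh; results.json is a hypothetical file assumed to hold that block verbatim.

# hedged example: summarize the bdevperf results JSON shown above
jq -r '.results[] | "\(.job): \(.iops) IOPS, avg \(.avg_latency_us) us"' results.json
# expected output for the block above:
#   nvme0n1: 5440.443860359279 IOPS, avg 2936.8523199456336 us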
00:25:02.311 [2024-11-26 20:55:05.764744] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1765221 ] 00:25:02.311 [2024-11-26 20:55:05.830789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.311 [2024-11-26 20:55:05.890483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:02.311 20:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:02.311 20:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:02.311 20:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:02.311 20:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:02.569 20:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:02.569 20:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.569 20:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:02.828 20:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.828 20:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:02.828 20:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:03.085 nvme0n1 00:25:03.085 20:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:03.085 20:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.086 20:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:03.343 20:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.344 20:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:03.344 20:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:03.344 Running I/O for 2 seconds... 
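The xtrace lines above show how the randwrite digest-error phase is wired up: NVMe error statistics and unlimited bdev retries are enabled in the bperf bdevperf instance, crc32c error injection is disabled while the controller is attached with data digest enabled (--ddgst), injection is then switched to corrupt every 256th crc32c operation, I/O runs for two seconds, and the transient-transport-error count is read back over RPC (as in the get_transient_errcount trace earlier). The sketch below restates that sequence under stated assumptions: the bdevperf app from the trace (bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z) is already listening, and the accel_error_inject_error calls reach the nvmf target's default RPC socket, which is what rpc_cmd does in the trace.

#!/usr/bin/env bash
# Hedged sketch of the digest-error phase traced above; socket paths, the
# subsystem NQN and the transport address are copied from the log, the rest
# of the environment is assumed.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
bperf_sock=/var/tmp/bperf.sock

# Keep per-command NVMe error statistics and retry failed I/O indefinitely.
"$rootdir/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Target side: pause injection while attaching, then corrupt every 256th
# crc32c so completions fail with COMMAND TRANSIENT TRANSPORT ERROR (00/22).
"$rootdir/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
"$rootdir/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
"$rootdir/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256

# Drive the 2-second workload configured on the bdevperf command line, then
# count completions that ended in the transient transport error status.
"$rootdir/examples/bdev/bdevperf/bdevperf.py" -s "$bperf_sock" perform_tests
errcount=$("$rootdir/scripts/rpc.py" -s "$bperf_sock" bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
echo "transient transport errors: $errcount"
(( errcount > 0 ))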
00:25:03.344 [2024-11-26 20:55:06.931019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ee1b48 00:25:03.344 [2024-11-26 20:55:06.932251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.344 [2024-11-26 20:55:06.932315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:03.344 [2024-11-26 20:55:06.942689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ee3498 00:25:03.344 [2024-11-26 20:55:06.943629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.344 [2024-11-26 20:55:06.943660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:03.344 [2024-11-26 20:55:06.955375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ef4b08 00:25:03.344 [2024-11-26 20:55:06.956615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.344 [2024-11-26 20:55:06.956645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:03.344 [2024-11-26 20:55:06.967729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ee4140 00:25:03.344 [2024-11-26 20:55:06.969051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.344 [2024-11-26 20:55:06.969094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:03.344 [2024-11-26 20:55:06.979145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ee6fa8 00:25:03.344 [2024-11-26 20:55:06.980270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.344 [2024-11-26 20:55:06.980299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:03.344 [2024-11-26 20:55:06.990975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ee3060 00:25:03.344 [2024-11-26 20:55:06.991997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.344 [2024-11-26 20:55:06.992041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:03.344 [2024-11-26 20:55:07.003558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ee6300 00:25:03.344 [2024-11-26 20:55:07.004766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.344 [2024-11-26 20:55:07.004812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 
sqhd:0045 p:0 m:0 dnr:0 00:25:03.344 [2024-11-26 20:55:07.015628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ee5ec8 00:25:03.344 [2024-11-26 20:55:07.016911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.344 [2024-11-26 20:55:07.016955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:03.344 [2024-11-26 20:55:07.027808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ef9f68 00:25:03.344 [2024-11-26 20:55:07.028859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.344 [2024-11-26 20:55:07.028903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:03.602 [2024-11-26 20:55:07.039848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efa7d8 00:25:03.602 [2024-11-26 20:55:07.041051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.602 [2024-11-26 20:55:07.041080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:03.602 [2024-11-26 20:55:07.051732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ef96f8 00:25:03.602 [2024-11-26 20:55:07.052804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.602 [2024-11-26 20:55:07.052848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:03.602 [2024-11-26 20:55:07.063660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ef9b30 00:25:03.602 [2024-11-26 20:55:07.064859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.602 [2024-11-26 20:55:07.064888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:03.602 [2024-11-26 20:55:07.076096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ef46d0 00:25:03.602 [2024-11-26 20:55:07.077411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.602 [2024-11-26 20:55:07.077440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:03.602 [2024-11-26 20:55:07.088147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ee84c0 00:25:03.602 [2024-11-26 20:55:07.089131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.602 [2024-11-26 20:55:07.089160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:32 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:03.602 [2024-11-26 20:55:07.099408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ede8a8 00:25:03.603 [2024-11-26 20:55:07.100216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.603 [2024-11-26 20:55:07.100245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:03.603 [2024-11-26 20:55:07.111644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016eee190 00:25:03.603 [2024-11-26 20:55:07.112735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.603 [2024-11-26 20:55:07.112778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:03.603 [2024-11-26 20:55:07.122895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ef0ff8 00:25:03.603 [2024-11-26 20:55:07.123864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.603 [2024-11-26 20:55:07.123907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:03.603 [2024-11-26 20:55:07.134188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ef31b8 00:25:03.603 [2024-11-26 20:55:07.135072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.603 [2024-11-26 20:55:07.135115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:03.603 [2024-11-26 20:55:07.149192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ee5ec8 00:25:03.603 [2024-11-26 20:55:07.150904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.603 [2024-11-26 20:55:07.150955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:03.603 [2024-11-26 20:55:07.160481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ef3a28 00:25:03.603 [2024-11-26 20:55:07.162084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.603 [2024-11-26 20:55:07.162128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:03.603 [2024-11-26 20:55:07.171267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ee0ea0 00:25:03.603 [2024-11-26 20:55:07.172568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.603 [2024-11-26 20:55:07.172597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:03.603 [2024-11-26 20:55:07.183215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ee7c50 00:25:03.603 [2024-11-26 20:55:07.184444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.603 [2024-11-26 20:55:07.184474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:03.603 [2024-11-26 20:55:07.196129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efc128 00:25:03.603 [2024-11-26 20:55:07.197613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.603 [2024-11-26 20:55:07.197642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:03.603 [2024-11-26 20:55:07.208401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ef2d80 00:25:03.603 [2024-11-26 20:55:07.209903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:6882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.603 [2024-11-26 20:55:07.209947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:03.603 [2024-11-26 20:55:07.219700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ef46d0 00:25:03.603 [2024-11-26 20:55:07.220942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:24365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.603 [2024-11-26 20:55:07.220985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:03.603 [2024-11-26 20:55:07.230842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ef57b0 00:25:03.603 [2024-11-26 20:55:07.231937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.603 [2024-11-26 20:55:07.231982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:03.603 [2024-11-26 20:55:07.242137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ef4298 00:25:03.603 [2024-11-26 20:55:07.243174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.603 [2024-11-26 20:55:07.243217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:03.603 [2024-11-26 20:55:07.253391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efc998 00:25:03.603 [2024-11-26 20:55:07.254278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.603 [2024-11-26 20:55:07.254330] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:03.603 [2024-11-26 20:55:07.268172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016eebb98 00:25:03.603 [2024-11-26 20:55:07.269796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.603 [2024-11-26 20:55:07.269840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:03.603 [2024-11-26 20:55:07.277773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efa3a0 00:25:03.603 [2024-11-26 20:55:07.278783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:24344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.603 [2024-11-26 20:55:07.278827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:03.603 [2024-11-26 20:55:07.292268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016eee190 00:25:03.603 [2024-11-26 20:55:07.293992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.603 [2024-11-26 20:55:07.294021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:03.862 [2024-11-26 20:55:07.303408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016eebfd0 00:25:03.862 [2024-11-26 20:55:07.305219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.862 [2024-11-26 20:55:07.305249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:03.862 [2024-11-26 20:55:07.316115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ee0630 00:25:03.862 [2024-11-26 20:55:07.317231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.862 [2024-11-26 20:55:07.317260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:03.862 [2024-11-26 20:55:07.327393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ef92c0 00:25:03.862 [2024-11-26 20:55:07.328267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.862 [2024-11-26 20:55:07.328295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:03.862 [2024-11-26 20:55:07.338846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ee5ec8 00:25:03.862 [2024-11-26 20:55:07.339981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.862 [2024-11-26 
20:55:07.340008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:03.862 [2024-11-26 20:55:07.350668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ef1430 00:25:03.862 [2024-11-26 20:55:07.351765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:11604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.862 [2024-11-26 20:55:07.351810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:03.862 [2024-11-26 20:55:07.362085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ee12d8 00:25:03.862 [2024-11-26 20:55:07.363062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.862 [2024-11-26 20:55:07.363106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:03.862 [2024-11-26 20:55:07.373397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016eeff18 00:25:03.862 [2024-11-26 20:55:07.374179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.862 [2024-11-26 20:55:07.374223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:03.862 [2024-11-26 20:55:07.388630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016eef270 00:25:03.862 [2024-11-26 20:55:07.390339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.862 [2024-11-26 20:55:07.390382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:03.862 [2024-11-26 20:55:07.397113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016eef270 00:25:03.862 [2024-11-26 20:55:07.397996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.862 [2024-11-26 20:55:07.398038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:03.862 [2024-11-26 20:55:07.411309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016eed920 00:25:03.862 [2024-11-26 20:55:07.412716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:17288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.862 [2024-11-26 20:55:07.412759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:03.862 [2024-11-26 20:55:07.422577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efa7d8 00:25:03.862 [2024-11-26 20:55:07.423762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:03.862 [2024-11-26 20:55:07.423806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:03.862 [2024-11-26 20:55:07.433893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ef3a28 00:25:03.862 [2024-11-26 20:55:07.435036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.862 [2024-11-26 20:55:07.435065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:03.862 [2024-11-26 20:55:07.445328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ee3d08 00:25:03.862 [2024-11-26 20:55:07.446338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.862 [2024-11-26 20:55:07.446367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:03.862 [2024-11-26 20:55:07.456842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efeb58 00:25:03.862 [2024-11-26 20:55:07.457624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.862 [2024-11-26 20:55:07.457660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:03.862 [2024-11-26 20:55:07.469290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ee27f0 00:25:03.862 [2024-11-26 20:55:07.470414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.862 [2024-11-26 20:55:07.470442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:03.862 [2024-11-26 20:55:07.481267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ee1f80 00:25:03.862 [2024-11-26 20:55:07.481953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.862 [2024-11-26 20:55:07.481981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:03.862 [2024-11-26 20:55:07.496025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016eeaef0 00:25:03.862 [2024-11-26 20:55:07.497879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:21965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.862 [2024-11-26 20:55:07.497923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:03.862 [2024-11-26 20:55:07.504535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efc560 00:25:03.862 [2024-11-26 20:55:07.505482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2812 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:03.862 [2024-11-26 20:55:07.505511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:03.862 [2024-11-26 20:55:07.516959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016eeaab8 00:25:03.862 [2024-11-26 20:55:07.518099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.862 [2024-11-26 20:55:07.518142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:03.862 [2024-11-26 20:55:07.529023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ef6458 00:25:03.862 [2024-11-26 20:55:07.529762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.862 [2024-11-26 20:55:07.529791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:03.862 [2024-11-26 20:55:07.542736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efdeb0 00:25:03.862 [2024-11-26 20:55:07.544103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.862 [2024-11-26 20:55:07.544147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:03.862 [2024-11-26 20:55:07.554537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ee1b48 00:25:03.863 [2024-11-26 20:55:07.556175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:03.863 [2024-11-26 20:55:07.556220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:04.121 [2024-11-26 20:55:07.565242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efc128 00:25:04.121 [2024-11-26 20:55:07.566892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.121 [2024-11-26 20:55:07.566922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:04.121 [2024-11-26 20:55:07.575514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ee5a90 00:25:04.121 [2024-11-26 20:55:07.576353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.121 [2024-11-26 20:55:07.576396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:04.121 [2024-11-26 20:55:07.587893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016eedd58 00:25:04.121 [2024-11-26 20:55:07.588940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15110 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.121 [2024-11-26 20:55:07.588983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:04.121 [2024-11-26 20:55:07.600111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016eea680 00:25:04.121 [2024-11-26 20:55:07.601089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.121 [2024-11-26 20:55:07.601133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:04.121 [2024-11-26 20:55:07.611681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016eebb98 00:25:04.121 [2024-11-26 20:55:07.612667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.121 [2024-11-26 20:55:07.612711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:04.121 [2024-11-26 20:55:07.626101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ee1f80 00:25:04.121 [2024-11-26 20:55:07.627513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.121 [2024-11-26 20:55:07.627542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:04.121 [2024-11-26 20:55:07.638826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ede038 00:25:04.121 [2024-11-26 20:55:07.640502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.121 [2024-11-26 20:55:07.640531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:04.121 [2024-11-26 20:55:07.647244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016eeea00 00:25:04.121 [2024-11-26 20:55:07.648113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.121 [2024-11-26 20:55:07.648156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:04.121 [2024-11-26 20:55:07.661757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016eeee38 00:25:04.121 [2024-11-26 20:55:07.663049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.121 [2024-11-26 20:55:07.663092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:04.121 [2024-11-26 20:55:07.673996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ef35f0 00:25:04.121 [2024-11-26 20:55:07.675647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:60 nsid:1 lba:3196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.121 [2024-11-26 20:55:07.675675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:04.121 [2024-11-26 20:55:07.685143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016eeaef0 00:25:04.121 [2024-11-26 20:55:07.686346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:19022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.121 [2024-11-26 20:55:07.686375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:04.121 [2024-11-26 20:55:07.697332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016eebfd0 00:25:04.121 [2024-11-26 20:55:07.698449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.121 [2024-11-26 20:55:07.698477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:04.121 [2024-11-26 20:55:07.708409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ee7c50 00:25:04.121 [2024-11-26 20:55:07.709684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.121 [2024-11-26 20:55:07.709712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:04.121 [2024-11-26 20:55:07.720373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016eec408 00:25:04.121 [2024-11-26 20:55:07.721341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.121 [2024-11-26 20:55:07.721383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:04.121 [2024-11-26 20:55:07.733073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efeb58 00:25:04.121 [2024-11-26 20:55:07.734315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.121 [2024-11-26 20:55:07.734358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:04.121 [2024-11-26 20:55:07.744313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ee2c28 00:25:04.121 [2024-11-26 20:55:07.745283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.121 [2024-11-26 20:55:07.745321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:04.121 [2024-11-26 20:55:07.758216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ee1710 00:25:04.121 [2024-11-26 20:55:07.759866] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.122 [2024-11-26 20:55:07.759910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:04.122 [2024-11-26 20:55:07.766655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016edece0 00:25:04.122 [2024-11-26 20:55:07.767344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.122 [2024-11-26 20:55:07.767393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:04.122 [2024-11-26 20:55:07.780962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016eed4e8 00:25:04.122 [2024-11-26 20:55:07.782369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.122 [2024-11-26 20:55:07.782412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:04.122 [2024-11-26 20:55:07.793581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ef7100 00:25:04.122 [2024-11-26 20:55:07.795107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.122 [2024-11-26 20:55:07.795151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:04.122 [2024-11-26 20:55:07.805757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ef92c0 00:25:04.122 [2024-11-26 20:55:07.807200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.122 [2024-11-26 20:55:07.807243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:04.380 [2024-11-26 20:55:07.816435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efc128 00:25:04.380 [2024-11-26 20:55:07.818117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.380 [2024-11-26 20:55:07.818145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:04.380 [2024-11-26 20:55:07.826620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ef4f40 00:25:04.380 [2024-11-26 20:55:07.827407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.380 [2024-11-26 20:55:07.827449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:04.380 [2024-11-26 20:55:07.838972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ef8618 00:25:04.380 [2024-11-26 20:55:07.839940] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.380 [2024-11-26 20:55:07.839982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:04.380 [2024-11-26 20:55:07.853501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016eeaef0 00:25:04.380 [2024-11-26 20:55:07.855044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.380 [2024-11-26 20:55:07.855088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:04.380 [2024-11-26 20:55:07.864227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ee7c50 00:25:04.381 [2024-11-26 20:55:07.865891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.381 [2024-11-26 20:55:07.865920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:04.381 [2024-11-26 20:55:07.876505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ef7da8 00:25:04.381 [2024-11-26 20:55:07.877873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.381 [2024-11-26 20:55:07.877902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:04.381 [2024-11-26 20:55:07.888447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016eea248 00:25:04.381 [2024-11-26 20:55:07.889845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.381 [2024-11-26 20:55:07.889889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:04.381 [2024-11-26 20:55:07.900491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016eeaab8 00:25:04.381 [2024-11-26 20:55:07.901442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.381 [2024-11-26 20:55:07.901472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:04.381 [2024-11-26 20:55:07.914871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016edf550 00:25:04.381 21146.00 IOPS, 82.60 MiB/s [2024-11-26T19:55:08.078Z] [2024-11-26 20:55:07.916794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.381 [2024-11-26 20:55:07.916837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:04.381 [2024-11-26 20:55:07.923239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) 
with pdu=0x200016ee95a0 00:25:04.381 [2024-11-26 20:55:07.924293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:25352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.381 [2024-11-26 20:55:07.924344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:04.381 [2024-11-26 20:55:07.935559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ef9f68 00:25:04.381 [2024-11-26 20:55:07.936762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.381 [2024-11-26 20:55:07.936806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:04.381 [2024-11-26 20:55:07.948055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ef5378 00:25:04.381 [2024-11-26 20:55:07.948835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.381 [2024-11-26 20:55:07.948877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:04.381 [2024-11-26 20:55:07.959391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efd640 00:25:04.381 [2024-11-26 20:55:07.959973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.381 [2024-11-26 20:55:07.960001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:04.381 [2024-11-26 20:55:07.973499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ee95a0 00:25:04.381 [2024-11-26 20:55:07.975097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.381 [2024-11-26 20:55:07.975141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:04.381 [2024-11-26 20:55:07.981841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016edece0 00:25:04.381 [2024-11-26 20:55:07.982559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.381 [2024-11-26 20:55:07.982602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:04.381 [2024-11-26 20:55:07.994135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ef5be8 00:25:04.381 [2024-11-26 20:55:07.994959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.381 [2024-11-26 20:55:07.995004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:04.381 [2024-11-26 20:55:08.008282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1e2bd50) with pdu=0x200016efe2e8 00:25:04.381 [2024-11-26 20:55:08.009490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.381 [2024-11-26 20:55:08.009518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:04.381 [2024-11-26 20:55:08.020742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efb480 00:25:04.381 [2024-11-26 20:55:08.022059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.381 [2024-11-26 20:55:08.022102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:04.381 [2024-11-26 20:55:08.030692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ee0630 00:25:04.381 [2024-11-26 20:55:08.031382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:91 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.381 [2024-11-26 20:55:08.031411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:04.381 [2024-11-26 20:55:08.045526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ee9168 00:25:04.381 [2024-11-26 20:55:08.047275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.381 [2024-11-26 20:55:08.047325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:04.381 [2024-11-26 20:55:08.053912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ede038 00:25:04.381 [2024-11-26 20:55:08.054830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.381 [2024-11-26 20:55:08.054873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:04.381 [2024-11-26 20:55:08.068152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016eecc78 00:25:04.381 [2024-11-26 20:55:08.069629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:1585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.381 [2024-11-26 20:55:08.069673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:04.640 [2024-11-26 20:55:08.078999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ee6738 00:25:04.640 [2024-11-26 20:55:08.080150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.640 [2024-11-26 20:55:08.080200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:04.640 [2024-11-26 20:55:08.090024] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016eef6a8 00:25:04.640 [2024-11-26 20:55:08.091059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.640 [2024-11-26 20:55:08.091102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:04.640 [2024-11-26 20:55:08.104312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016eec408 00:25:04.640 [2024-11-26 20:55:08.105961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.640 [2024-11-26 20:55:08.106003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:04.640 [2024-11-26 20:55:08.116979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efb048 00:25:04.640 [2024-11-26 20:55:08.118647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.640 [2024-11-26 20:55:08.118690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:04.640 [2024-11-26 20:55:08.125406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016eea248 00:25:04.640 [2024-11-26 20:55:08.126193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.640 [2024-11-26 20:55:08.126234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:04.640 [2024-11-26 20:55:08.137130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ef3a28 00:25:04.640 [2024-11-26 20:55:08.138077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.640 [2024-11-26 20:55:08.138119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.640 [2024-11-26 20:55:08.151534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efeb58 00:25:04.640 [2024-11-26 20:55:08.152844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.641 [2024-11-26 20:55:08.152886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:04.641 [2024-11-26 20:55:08.162731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016eedd58 00:25:04.641 [2024-11-26 20:55:08.163940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.641 [2024-11-26 20:55:08.163983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:04.641 [2024-11-26 20:55:08.176576] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ef5378 00:25:04.641 [2024-11-26 20:55:08.178280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.641 [2024-11-26 20:55:08.178329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:04.641 [2024-11-26 20:55:08.184829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efeb58 00:25:04.641 [2024-11-26 20:55:08.185803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.641 [2024-11-26 20:55:08.185846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:04.641 [2024-11-26 20:55:08.199183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016eed0b0 00:25:04.641 [2024-11-26 20:55:08.200761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.641 [2024-11-26 20:55:08.200805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:04.641 [2024-11-26 20:55:08.210156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efe720 00:25:04.641 [2024-11-26 20:55:08.211514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.641 [2024-11-26 20:55:08.211542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:04.641 [2024-11-26 20:55:08.221990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ef7538 00:25:04.641 [2024-11-26 20:55:08.223288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.641 [2024-11-26 20:55:08.223323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:04.641 [2024-11-26 20:55:08.236365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ef2948 00:25:04.641 [2024-11-26 20:55:08.238164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.641 [2024-11-26 20:55:08.238207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:04.641 [2024-11-26 20:55:08.244763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efac10 00:25:04.641 [2024-11-26 20:55:08.245839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.641 [2024-11-26 20:55:08.245882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:04.641 
[2024-11-26 20:55:08.259299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016eea680 00:25:04.641 [2024-11-26 20:55:08.261037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.641 [2024-11-26 20:55:08.261067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:04.641 [2024-11-26 20:55:08.267796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ee1b48 00:25:04.641 [2024-11-26 20:55:08.268545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.641 [2024-11-26 20:55:08.268574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:04.641 [2024-11-26 20:55:08.280482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ef2d80 00:25:04.641 [2024-11-26 20:55:08.281237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.641 [2024-11-26 20:55:08.281280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:04.641 [2024-11-26 20:55:08.295442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efb480 00:25:04.641 [2024-11-26 20:55:08.297147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.641 [2024-11-26 20:55:08.297191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:04.641 [2024-11-26 20:55:08.306648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ee23b8 00:25:04.641 [2024-11-26 20:55:08.308173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.641 [2024-11-26 20:55:08.308217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:04.641 [2024-11-26 20:55:08.318040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ef9f68 00:25:04.641 [2024-11-26 20:55:08.319592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.641 [2024-11-26 20:55:08.319636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:04.641 [2024-11-26 20:55:08.329042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ef9b30 00:25:04.641 [2024-11-26 20:55:08.330259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.641 [2024-11-26 20:55:08.330288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0066 p:0 m:0 
dnr:0 00:25:04.900 [2024-11-26 20:55:08.340995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016ee6b70 00:25:04.900 [2024-11-26 20:55:08.342104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.900 [2024-11-26 20:55:08.342146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:04.900 [2024-11-26 20:55:08.353488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016eea680 00:25:04.900 [2024-11-26 20:55:08.354846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.900 [2024-11-26 20:55:08.354888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:04.900 [2024-11-26 20:55:08.365954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016eebb98 00:25:04.900 [2024-11-26 20:55:08.367464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.900 [2024-11-26 20:55:08.367508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:04.900 [2024-11-26 20:55:08.377387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:04.900 [2024-11-26 20:55:08.377608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.900 [2024-11-26 20:55:08.377650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:04.900 [2024-11-26 20:55:08.391921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:04.900 [2024-11-26 20:55:08.392173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.900 [2024-11-26 20:55:08.392220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:04.900 [2024-11-26 20:55:08.406199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:04.900 [2024-11-26 20:55:08.406445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.900 [2024-11-26 20:55:08.406473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:04.900 [2024-11-26 20:55:08.420679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:04.900 [2024-11-26 20:55:08.420935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:11205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.900 [2024-11-26 20:55:08.420977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 
cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:04.900 [2024-11-26 20:55:08.434977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:04.900 [2024-11-26 20:55:08.435255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.900 [2024-11-26 20:55:08.435298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:04.900 [2024-11-26 20:55:08.449221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:04.900 [2024-11-26 20:55:08.449448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.900 [2024-11-26 20:55:08.449477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:04.900 [2024-11-26 20:55:08.463192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:04.900 [2024-11-26 20:55:08.463438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.900 [2024-11-26 20:55:08.463467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:04.900 [2024-11-26 20:55:08.477048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:04.900 [2024-11-26 20:55:08.477292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.900 [2024-11-26 20:55:08.477326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:04.900 [2024-11-26 20:55:08.491278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:04.900 [2024-11-26 20:55:08.491562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.900 [2024-11-26 20:55:08.491605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:04.900 [2024-11-26 20:55:08.505547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:04.901 [2024-11-26 20:55:08.505840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.901 [2024-11-26 20:55:08.505884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:04.901 [2024-11-26 20:55:08.519877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:04.901 [2024-11-26 20:55:08.520168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.901 [2024-11-26 20:55:08.520210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:04.901 [2024-11-26 20:55:08.534168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:04.901 [2024-11-26 20:55:08.534432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.901 [2024-11-26 20:55:08.534463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:04.901 [2024-11-26 20:55:08.548458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:04.901 [2024-11-26 20:55:08.548729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:10017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.901 [2024-11-26 20:55:08.548771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:04.901 [2024-11-26 20:55:08.562724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:04.901 [2024-11-26 20:55:08.562979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.901 [2024-11-26 20:55:08.563022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:04.901 [2024-11-26 20:55:08.577114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:04.901 [2024-11-26 20:55:08.577387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.901 [2024-11-26 20:55:08.577431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:04.901 [2024-11-26 20:55:08.591427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:04.901 [2024-11-26 20:55:08.591635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:25144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.901 [2024-11-26 20:55:08.591677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:05.159 [2024-11-26 20:55:08.605446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:05.159 [2024-11-26 20:55:08.605670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.159 [2024-11-26 20:55:08.605698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:05.159 [2024-11-26 20:55:08.619667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:05.159 [2024-11-26 20:55:08.619986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.159 [2024-11-26 20:55:08.620015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:05.159 [2024-11-26 20:55:08.633852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:05.160 [2024-11-26 20:55:08.634080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.160 [2024-11-26 20:55:08.634124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:05.160 [2024-11-26 20:55:08.647758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:05.160 [2024-11-26 20:55:08.648002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.160 [2024-11-26 20:55:08.648029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:05.160 [2024-11-26 20:55:08.661527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:05.160 [2024-11-26 20:55:08.661762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.160 [2024-11-26 20:55:08.661806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:05.160 [2024-11-26 20:55:08.675263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:05.160 [2024-11-26 20:55:08.675474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:15657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.160 [2024-11-26 20:55:08.675503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:05.160 [2024-11-26 20:55:08.688965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:05.160 [2024-11-26 20:55:08.689188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.160 [2024-11-26 20:55:08.689216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:05.160 [2024-11-26 20:55:08.702797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:05.160 [2024-11-26 20:55:08.703025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.160 [2024-11-26 20:55:08.703054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:05.160 [2024-11-26 20:55:08.716383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:05.160 [2024-11-26 20:55:08.716598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.160 [2024-11-26 20:55:08.716625] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:05.160 [2024-11-26 20:55:08.730102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:05.160 [2024-11-26 20:55:08.730331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.160 [2024-11-26 20:55:08.730359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:05.160 [2024-11-26 20:55:08.743729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:05.160 [2024-11-26 20:55:08.743931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.160 [2024-11-26 20:55:08.743974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:05.160 [2024-11-26 20:55:08.757592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:05.160 [2024-11-26 20:55:08.757813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.160 [2024-11-26 20:55:08.757863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:05.160 [2024-11-26 20:55:08.771438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:05.160 [2024-11-26 20:55:08.771640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.160 [2024-11-26 20:55:08.771668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:05.160 [2024-11-26 20:55:08.785004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:05.160 [2024-11-26 20:55:08.785226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.160 [2024-11-26 20:55:08.785253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:05.160 [2024-11-26 20:55:08.798647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:05.160 [2024-11-26 20:55:08.798874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.160 [2024-11-26 20:55:08.798902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:05.160 [2024-11-26 20:55:08.812439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:05.160 [2024-11-26 20:55:08.812701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.160 [2024-11-26 
20:55:08.812744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:05.160 [2024-11-26 20:55:08.826129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:05.160 [2024-11-26 20:55:08.826350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.160 [2024-11-26 20:55:08.826378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:05.160 [2024-11-26 20:55:08.840036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:05.160 [2024-11-26 20:55:08.840251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.160 [2024-11-26 20:55:08.840278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:05.160 [2024-11-26 20:55:08.853794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:05.160 [2024-11-26 20:55:08.854030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.160 [2024-11-26 20:55:08.854074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:05.418 [2024-11-26 20:55:08.867487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:05.418 [2024-11-26 20:55:08.867737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.418 [2024-11-26 20:55:08.867765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:05.418 [2024-11-26 20:55:08.881362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:05.418 [2024-11-26 20:55:08.881594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.418 [2024-11-26 20:55:08.881637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:05.418 [2024-11-26 20:55:08.895423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:05.418 [2024-11-26 20:55:08.895622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:05.418 [2024-11-26 20:55:08.895650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:05.418 [2024-11-26 20:55:08.909276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78 00:25:05.418 [2024-11-26 20:55:08.909503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
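Each failure record in this stretch repeats the same three-line pattern: data_crc32_calc_done() in tcp.c flags the data digest mismatch on the qpair, the WRITE that carried the corrupted payload is printed, and its completion is reported as COMMAND TRANSIENT TRANSPORT ERROR (00/22). A minimal sketch for tallying those completions from a saved copy of this console output (the bperf.log filename is purely illustrative; this run streams straight to the Jenkins console):

#!/usr/bin/env bash
# Tally the injected digest failures by the completion status they produce.
# Assumes the bdevperf console output has been captured to a file.
log=${1:-bperf.log}
grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' "$log"

The figure obtained this way can be cross-checked against the count the harness reads back just below through bdev_get_iostat.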
00:25:05.418 [2024-11-26 20:55:08.909532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:25:05.418 20454.50 IOPS, 79.90 MiB/s [2024-11-26T19:55:09.115Z] [2024-11-26 20:55:08.922861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2bd50) with pdu=0x200016efda78
00:25:05.418 [2024-11-26 20:55:08.923069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:05.418 [2024-11-26 20:55:08.923096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:25:05.418
00:25:05.418 Latency(us)
00:25:05.418 [2024-11-26T19:55:09.115Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:05.418 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:25:05.418 nvme0n1 : 2.01 20450.86 79.89 0.00 0.00 6244.21 2633.58 15922.82
00:25:05.418 [2024-11-26T19:55:09.115Z] ===================================================================================================================
00:25:05.418 [2024-11-26T19:55:09.115Z] Total : 20450.86 79.89 0.00 0.00 6244.21 2633.58 15922.82
00:25:05.418 {
00:25:05.418 "results": [
00:25:05.418 {
00:25:05.418 "job": "nvme0n1",
00:25:05.419 "core_mask": "0x2",
00:25:05.419 "workload": "randwrite",
00:25:05.419 "status": "finished",
00:25:05.419 "queue_depth": 128,
00:25:05.419 "io_size": 4096,
00:25:05.419 "runtime": 2.006615,
00:25:05.419 "iops": 20450.858784570035,
00:25:05.419 "mibps": 79.8861671272267,
00:25:05.419 "io_failed": 0,
00:25:05.419 "io_timeout": 0,
00:25:05.419 "avg_latency_us": 6244.207251739397,
00:25:05.419 "min_latency_us": 2633.5762962962963,
00:25:05.419 "max_latency_us": 15922.82074074074
00:25:05.419 }
00:25:05.419 ],
00:25:05.419 "core_count": 1
00:25:05.419 }
00:25:05.419 20:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:05.419 20:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:05.419 20:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:05.419 | .driver_specific
00:25:05.419 | .nvme_error
00:25:05.419 | .status_code
00:25:05.419 | .command_transient_transport_error'
00:25:05.419 20:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:05.675 20:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 161 > 0 ))
00:25:05.675 20:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1765221
00:25:05.675 20:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1765221 ']'
00:25:05.675 20:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1765221
00:25:05.675 20:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:25:05.675 20:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:05.675 20:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers
-o comm= 1765221 00:25:05.675 20:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:05.675 20:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:05.675 20:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1765221' 00:25:05.675 killing process with pid 1765221 00:25:05.675 20:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1765221 00:25:05.675 Received shutdown signal, test time was about 2.000000 seconds 00:25:05.675 00:25:05.675 Latency(us) 00:25:05.675 [2024-11-26T19:55:09.372Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:05.675 [2024-11-26T19:55:09.372Z] =================================================================================================================== 00:25:05.675 [2024-11-26T19:55:09.372Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:05.675 20:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1765221 00:25:05.933 20:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:25:05.933 20:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:05.933 20:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:05.933 20:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:05.933 20:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:05.933 20:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1765636 00:25:05.933 20:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:25:05.933 20:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1765636 /var/tmp/bperf.sock 00:25:05.933 20:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1765636 ']' 00:25:05.933 20:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:05.933 20:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:05.933 20:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:05.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:05.933 20:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:05.933 20:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:05.933 [2024-11-26 20:55:09.542459] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
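The trace above starts a second bdevperf instance for this pass (-o 131072 and -q 16 match the bs and qd variables set just before, -m 2 puts its reactor on core 1, and -z appears to hold the workload until perform_tests is issued later) and then waits for its RPC socket via waitforlisten. A rough stand-alone equivalent of that launch-and-wait step, where the polling loop is only a sketch of what waitforlisten does (the trace shows max_retries=100):

#!/usr/bin/env bash
# Launch a bdevperf instance dedicated to this pass and wait for its RPC socket.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
bperf_sock=/var/tmp/bperf.sock
"$spdk"/build/examples/bdevperf -m 2 -r "$bperf_sock" -w randwrite -o 131072 -t 2 -q 16 -z &
bperfpid=$!
# Poll until the UNIX-domain socket accepts RPCs; rpc_get_methods is a cheap query.
for _ in $(seq 1 100); do
    "$spdk"/scripts/rpc.py -s "$bperf_sock" rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done
echo "bdevperf ready as pid $bperfpid"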
00:25:05.933 [2024-11-26 20:55:09.542543] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1765636 ] 00:25:05.933 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:05.933 Zero copy mechanism will not be used. 00:25:05.933 [2024-11-26 20:55:09.607689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.192 [2024-11-26 20:55:09.665153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.192 20:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:06.192 20:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:06.192 20:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:06.192 20:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:06.449 20:55:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:06.449 20:55:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.449 20:55:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:06.449 20:55:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.449 20:55:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:06.449 20:55:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:06.706 nvme0n1 00:25:06.706 20:55:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:06.706 20:55:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.963 20:55:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:06.964 20:55:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.964 20:55:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:06.964 20:55:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:06.964 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:06.964 Zero copy mechanism will not be used. 00:25:06.964 Running I/O for 2 seconds... 
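Condensed, the per-pass setup that digest.sh just traced comes down to the sequence below (a sketch assembled from the commands above; the accel_error_inject_error calls are issued through rpc_cmd, that is against the target application's own RPC socket rather than /var/tmp/bperf.sock, and the final read-back mirrors the get_transient_errcount step shown after the 4 KiB pass):

#!/usr/bin/env bash
set -e
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
bperf_sock=/var/tmp/bperf.sock

# bdevperf side: keep per-status-code NVMe error counters and retry transient errors forever.
"$spdk"/scripts/rpc.py -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Target side: keep crc32c corruption disabled while the data-digest connection is set up
# (no -s here, so rpc.py talks to the target's default RPC socket, as rpc_cmd does).
"$spdk"/scripts/rpc.py accel_error_inject_error -o crc32c -t disable

# bdevperf side: attach the subsystem over TCP with data digest (--ddgst) enabled.
"$spdk"/scripts/rpc.py -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Target side: re-arm crc32c corruption with the same flags as the rpc_cmd above,
# then kick off the 2-second randwrite workload that is now running.
"$spdk"/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
"$spdk"/examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests

# Afterwards, read back how many completions carried the transient transport error status.
"$spdk"/scripts/rpc.py -s "$bperf_sock" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'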
00:25:06.964 [2024-11-26 20:55:10.524768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:06.964 [2024-11-26 20:55:10.524866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.964 [2024-11-26 20:55:10.524907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:06.964 [2024-11-26 20:55:10.530595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:06.964 [2024-11-26 20:55:10.530675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.964 [2024-11-26 20:55:10.530704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:06.964 [2024-11-26 20:55:10.536613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:06.964 [2024-11-26 20:55:10.536686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.964 [2024-11-26 20:55:10.536713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:06.964 [2024-11-26 20:55:10.542411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:06.964 [2024-11-26 20:55:10.542488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.964 [2024-11-26 20:55:10.542516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.964 [2024-11-26 20:55:10.548343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:06.964 [2024-11-26 20:55:10.548421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.964 [2024-11-26 20:55:10.548450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:06.964 [2024-11-26 20:55:10.554665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:06.964 [2024-11-26 20:55:10.554755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.964 [2024-11-26 20:55:10.554783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:06.964 [2024-11-26 20:55:10.560689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:06.964 [2024-11-26 20:55:10.560765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.964 [2024-11-26 20:55:10.560793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:06.964 [2024-11-26 20:55:10.565801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:06.964 [2024-11-26 20:55:10.565891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.964 [2024-11-26 20:55:10.565919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.964 [2024-11-26 20:55:10.571008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:06.964 [2024-11-26 20:55:10.571084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.964 [2024-11-26 20:55:10.571112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:06.964 [2024-11-26 20:55:10.576101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:06.964 [2024-11-26 20:55:10.576175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.964 [2024-11-26 20:55:10.576203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:06.964 [2024-11-26 20:55:10.581053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:06.964 [2024-11-26 20:55:10.581128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.964 [2024-11-26 20:55:10.581157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:06.964 [2024-11-26 20:55:10.586632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:06.964 [2024-11-26 20:55:10.586722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.964 [2024-11-26 20:55:10.586751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.964 [2024-11-26 20:55:10.592483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:06.964 [2024-11-26 20:55:10.592560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.964 [2024-11-26 20:55:10.592594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:06.964 [2024-11-26 20:55:10.597806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:06.964 [2024-11-26 20:55:10.597901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.964 [2024-11-26 20:55:10.597929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:06.964 [2024-11-26 20:55:10.603622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:06.964 [2024-11-26 20:55:10.603700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.964 [2024-11-26 20:55:10.603729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:06.964 [2024-11-26 20:55:10.609591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:06.964 [2024-11-26 20:55:10.609664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.964 [2024-11-26 20:55:10.609691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.964 [2024-11-26 20:55:10.614985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:06.964 [2024-11-26 20:55:10.615061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.964 [2024-11-26 20:55:10.615089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:06.964 [2024-11-26 20:55:10.620169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:06.964 [2024-11-26 20:55:10.620246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.964 [2024-11-26 20:55:10.620273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:06.964 [2024-11-26 20:55:10.625586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:06.964 [2024-11-26 20:55:10.625683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.965 [2024-11-26 20:55:10.625711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:06.965 [2024-11-26 20:55:10.631836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:06.965 [2024-11-26 20:55:10.632004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.965 [2024-11-26 20:55:10.632033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.965 [2024-11-26 20:55:10.639156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:06.965 [2024-11-26 20:55:10.639319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.965 [2024-11-26 20:55:10.639348] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:06.965 [2024-11-26 20:55:10.645296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:06.965 [2024-11-26 20:55:10.645443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.965 [2024-11-26 20:55:10.645471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:06.965 [2024-11-26 20:55:10.651294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:06.965 [2024-11-26 20:55:10.651441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.965 [2024-11-26 20:55:10.651469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:06.965 [2024-11-26 20:55:10.657537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:06.965 [2024-11-26 20:55:10.657708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.965 [2024-11-26 20:55:10.657737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.224 [2024-11-26 20:55:10.664191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.224 [2024-11-26 20:55:10.664386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.224 [2024-11-26 20:55:10.664416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:07.224 [2024-11-26 20:55:10.670182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.224 [2024-11-26 20:55:10.670297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.224 [2024-11-26 20:55:10.670333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:07.224 [2024-11-26 20:55:10.676262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.224 [2024-11-26 20:55:10.676342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.224 [2024-11-26 20:55:10.676370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:07.224 [2024-11-26 20:55:10.682522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.224 [2024-11-26 20:55:10.682596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.224 
[2024-11-26 20:55:10.682624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.224 [2024-11-26 20:55:10.688001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.224 [2024-11-26 20:55:10.688075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.224 [2024-11-26 20:55:10.688102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:07.224 [2024-11-26 20:55:10.693626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.224 [2024-11-26 20:55:10.693703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.224 [2024-11-26 20:55:10.693730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:07.224 [2024-11-26 20:55:10.698674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.224 [2024-11-26 20:55:10.698745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.224 [2024-11-26 20:55:10.698772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:07.224 [2024-11-26 20:55:10.704349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.224 [2024-11-26 20:55:10.704463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.224 [2024-11-26 20:55:10.704492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.224 [2024-11-26 20:55:10.709576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.224 [2024-11-26 20:55:10.709652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.224 [2024-11-26 20:55:10.709679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:07.224 [2024-11-26 20:55:10.714719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.224 [2024-11-26 20:55:10.714791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.224 [2024-11-26 20:55:10.714818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:07.224 [2024-11-26 20:55:10.719774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.224 [2024-11-26 20:55:10.719847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10112 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.224 [2024-11-26 20:55:10.719874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:07.224 [2024-11-26 20:55:10.724780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.224 [2024-11-26 20:55:10.724851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.224 [2024-11-26 20:55:10.724877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.224 [2024-11-26 20:55:10.729730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.224 [2024-11-26 20:55:10.729804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.224 [2024-11-26 20:55:10.729831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:07.224 [2024-11-26 20:55:10.734752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.224 [2024-11-26 20:55:10.734827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.224 [2024-11-26 20:55:10.734854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:07.224 [2024-11-26 20:55:10.739792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.224 [2024-11-26 20:55:10.739874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.224 [2024-11-26 20:55:10.739900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:07.224 [2024-11-26 20:55:10.745211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.224 [2024-11-26 20:55:10.745286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.224 [2024-11-26 20:55:10.745321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.224 [2024-11-26 20:55:10.750784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.224 [2024-11-26 20:55:10.750861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.224 [2024-11-26 20:55:10.750887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:07.224 [2024-11-26 20:55:10.755932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.224 [2024-11-26 20:55:10.756007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:14 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.224 [2024-11-26 20:55:10.756034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:07.224 [2024-11-26 20:55:10.760977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.224 [2024-11-26 20:55:10.761052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.224 [2024-11-26 20:55:10.761079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:07.224 [2024-11-26 20:55:10.766012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.224 [2024-11-26 20:55:10.766087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.224 [2024-11-26 20:55:10.766113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.224 [2024-11-26 20:55:10.771047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.224 [2024-11-26 20:55:10.771121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.224 [2024-11-26 20:55:10.771148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:07.224 [2024-11-26 20:55:10.776102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.224 [2024-11-26 20:55:10.776177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.224 [2024-11-26 20:55:10.776203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:07.224 [2024-11-26 20:55:10.781014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.224 [2024-11-26 20:55:10.781124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.224 [2024-11-26 20:55:10.781152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:07.224 [2024-11-26 20:55:10.785914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.224 [2024-11-26 20:55:10.785988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.224 [2024-11-26 20:55:10.786014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.224 [2024-11-26 20:55:10.790944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.224 [2024-11-26 20:55:10.791018] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.224 [2024-11-26 20:55:10.791044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:07.224 [2024-11-26 20:55:10.796011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.224 [2024-11-26 20:55:10.796110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.224 [2024-11-26 20:55:10.796139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:07.225 [2024-11-26 20:55:10.801591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.225 [2024-11-26 20:55:10.801663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.225 [2024-11-26 20:55:10.801691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:07.225 [2024-11-26 20:55:10.807100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.225 [2024-11-26 20:55:10.807174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.225 [2024-11-26 20:55:10.807201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.225 [2024-11-26 20:55:10.812603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.225 [2024-11-26 20:55:10.812678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.225 [2024-11-26 20:55:10.812705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:07.225 [2024-11-26 20:55:10.818393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.225 [2024-11-26 20:55:10.818501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.225 [2024-11-26 20:55:10.818529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:07.225 [2024-11-26 20:55:10.823953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.225 [2024-11-26 20:55:10.824027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.225 [2024-11-26 20:55:10.824054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:07.225 [2024-11-26 20:55:10.829075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.225 
[2024-11-26 20:55:10.829148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.225 [2024-11-26 20:55:10.829182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.225 [2024-11-26 20:55:10.833988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.225 [2024-11-26 20:55:10.834061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.225 [2024-11-26 20:55:10.834088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:07.225 [2024-11-26 20:55:10.838959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.225 [2024-11-26 20:55:10.839031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.225 [2024-11-26 20:55:10.839057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:07.225 [2024-11-26 20:55:10.844024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.225 [2024-11-26 20:55:10.844098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.225 [2024-11-26 20:55:10.844124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:07.225 [2024-11-26 20:55:10.849061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.225 [2024-11-26 20:55:10.849134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.225 [2024-11-26 20:55:10.849160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.225 [2024-11-26 20:55:10.854096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.225 [2024-11-26 20:55:10.854171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.225 [2024-11-26 20:55:10.854199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:07.225 [2024-11-26 20:55:10.859050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.225 [2024-11-26 20:55:10.859124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.225 [2024-11-26 20:55:10.859151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:07.225 [2024-11-26 20:55:10.864035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.225 [2024-11-26 20:55:10.864109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.225 [2024-11-26 20:55:10.864136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:07.225 [2024-11-26 20:55:10.869117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.225 [2024-11-26 20:55:10.869194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.225 [2024-11-26 20:55:10.869221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.225 [2024-11-26 20:55:10.874056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.225 [2024-11-26 20:55:10.874138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.225 [2024-11-26 20:55:10.874165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:07.225 [2024-11-26 20:55:10.879454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.225 [2024-11-26 20:55:10.879527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.225 [2024-11-26 20:55:10.879553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:07.225 [2024-11-26 20:55:10.885558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.225 [2024-11-26 20:55:10.885674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.225 [2024-11-26 20:55:10.885703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:07.225 [2024-11-26 20:55:10.892381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.225 [2024-11-26 20:55:10.892537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.225 [2024-11-26 20:55:10.892565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.225 [2024-11-26 20:55:10.899900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.225 [2024-11-26 20:55:10.899988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.225 [2024-11-26 20:55:10.900015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:07.225 [2024-11-26 20:55:10.906753] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.225 [2024-11-26 20:55:10.906874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.225 [2024-11-26 20:55:10.906904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:07.225 [2024-11-26 20:55:10.914052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.225 [2024-11-26 20:55:10.914184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.225 [2024-11-26 20:55:10.914212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:07.486 [2024-11-26 20:55:10.921409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.486 [2024-11-26 20:55:10.921554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.486 [2024-11-26 20:55:10.921583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.486 [2024-11-26 20:55:10.928793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.486 [2024-11-26 20:55:10.928915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.486 [2024-11-26 20:55:10.928943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:07.486 [2024-11-26 20:55:10.936298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.486 [2024-11-26 20:55:10.936424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.486 [2024-11-26 20:55:10.936452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:07.486 [2024-11-26 20:55:10.943904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.486 [2024-11-26 20:55:10.944052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.486 [2024-11-26 20:55:10.944080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:07.486 [2024-11-26 20:55:10.950910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.486 [2024-11-26 20:55:10.951037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.486 [2024-11-26 20:55:10.951066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:07.486 [2024-11-26 20:55:10.958360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.486 [2024-11-26 20:55:10.958511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.486 [2024-11-26 20:55:10.958540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:07.486 [2024-11-26 20:55:10.965565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.486 [2024-11-26 20:55:10.965691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.486 [2024-11-26 20:55:10.965719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:07.486 [2024-11-26 20:55:10.972994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.486 [2024-11-26 20:55:10.973145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.486 [2024-11-26 20:55:10.973173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:07.486 [2024-11-26 20:55:10.980200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.486 [2024-11-26 20:55:10.980316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.486 [2024-11-26 20:55:10.980344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.486 [2024-11-26 20:55:10.987524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.486 [2024-11-26 20:55:10.987598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.486 [2024-11-26 20:55:10.987626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:07.486 [2024-11-26 20:55:10.993881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.486 [2024-11-26 20:55:10.993953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.486 [2024-11-26 20:55:10.993987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:07.486 [2024-11-26 20:55:10.999140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.486 [2024-11-26 20:55:10.999215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.486 [2024-11-26 20:55:10.999242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:07.486 [2024-11-26 20:55:11.004456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.486 [2024-11-26 20:55:11.004532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.486 [2024-11-26 20:55:11.004559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.486 [2024-11-26 20:55:11.010075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.486 [2024-11-26 20:55:11.010150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.486 [2024-11-26 20:55:11.010177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:07.486 [2024-11-26 20:55:11.016030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.486 [2024-11-26 20:55:11.016106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.486 [2024-11-26 20:55:11.016133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:07.486 [2024-11-26 20:55:11.021865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.486 [2024-11-26 20:55:11.021940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.486 [2024-11-26 20:55:11.021968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:07.486 [2024-11-26 20:55:11.027471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.486 [2024-11-26 20:55:11.027548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.486 [2024-11-26 20:55:11.027575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.486 [2024-11-26 20:55:11.032778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.486 [2024-11-26 20:55:11.032854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.486 [2024-11-26 20:55:11.032880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:07.486 [2024-11-26 20:55:11.038202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.486 [2024-11-26 20:55:11.038278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.486 [2024-11-26 20:55:11.038312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:07.486 [2024-11-26 20:55:11.043755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.486 [2024-11-26 20:55:11.043849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.486 [2024-11-26 20:55:11.043876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:07.486 [2024-11-26 20:55:11.048882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.486 [2024-11-26 20:55:11.048958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.486 [2024-11-26 20:55:11.048985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.486 [2024-11-26 20:55:11.053854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.486 [2024-11-26 20:55:11.053930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.486 [2024-11-26 20:55:11.053956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:07.486 [2024-11-26 20:55:11.058846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.486 [2024-11-26 20:55:11.058923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.487 [2024-11-26 20:55:11.058950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:07.487 [2024-11-26 20:55:11.063903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.487 [2024-11-26 20:55:11.063981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.487 [2024-11-26 20:55:11.064009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:07.487 [2024-11-26 20:55:11.068977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.487 [2024-11-26 20:55:11.069051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.487 [2024-11-26 20:55:11.069077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.487 [2024-11-26 20:55:11.074726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.487 [2024-11-26 20:55:11.074853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.487 [2024-11-26 20:55:11.074881] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:07.487 [2024-11-26 20:55:11.080617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.487 [2024-11-26 20:55:11.080691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.487 [2024-11-26 20:55:11.080718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:07.487 [2024-11-26 20:55:11.086118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.487 [2024-11-26 20:55:11.086192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.487 [2024-11-26 20:55:11.086219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:07.487 [2024-11-26 20:55:11.091447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.487 [2024-11-26 20:55:11.091527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.487 [2024-11-26 20:55:11.091554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.487 [2024-11-26 20:55:11.096338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.487 [2024-11-26 20:55:11.096415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.487 [2024-11-26 20:55:11.096442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:07.487 [2024-11-26 20:55:11.101241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.487 [2024-11-26 20:55:11.101321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.487 [2024-11-26 20:55:11.101348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:07.487 [2024-11-26 20:55:11.106240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.487 [2024-11-26 20:55:11.106318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.487 [2024-11-26 20:55:11.106347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:07.487 [2024-11-26 20:55:11.111910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.487 [2024-11-26 20:55:11.112011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.487 
[2024-11-26 20:55:11.112039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.487 [2024-11-26 20:55:11.119014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.487 [2024-11-26 20:55:11.119204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.487 [2024-11-26 20:55:11.119231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:07.487 [2024-11-26 20:55:11.125830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.487 [2024-11-26 20:55:11.125959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.487 [2024-11-26 20:55:11.125986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:07.487 [2024-11-26 20:55:11.133139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.487 [2024-11-26 20:55:11.133238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.487 [2024-11-26 20:55:11.133265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:07.487 [2024-11-26 20:55:11.139062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.487 [2024-11-26 20:55:11.139132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.487 [2024-11-26 20:55:11.139167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.487 [2024-11-26 20:55:11.144001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.487 [2024-11-26 20:55:11.144073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.487 [2024-11-26 20:55:11.144100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:07.487 [2024-11-26 20:55:11.148986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.487 [2024-11-26 20:55:11.149057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.487 [2024-11-26 20:55:11.149084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:07.487 [2024-11-26 20:55:11.154159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.487 [2024-11-26 20:55:11.154231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8928 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.487 [2024-11-26 20:55:11.154258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:07.487 [2024-11-26 20:55:11.159194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.487 [2024-11-26 20:55:11.159271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.487 [2024-11-26 20:55:11.159298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.487 [2024-11-26 20:55:11.164272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.487 [2024-11-26 20:55:11.164567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.487 [2024-11-26 20:55:11.164597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:07.487 [2024-11-26 20:55:11.169373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.487 [2024-11-26 20:55:11.169443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.487 [2024-11-26 20:55:11.169471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:07.487 [2024-11-26 20:55:11.174270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.487 [2024-11-26 20:55:11.174350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.487 [2024-11-26 20:55:11.174377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:07.487 [2024-11-26 20:55:11.179211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.487 [2024-11-26 20:55:11.179283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.487 [2024-11-26 20:55:11.179318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.748 [2024-11-26 20:55:11.184141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.748 [2024-11-26 20:55:11.184212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.748 [2024-11-26 20:55:11.184239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:07.748 [2024-11-26 20:55:11.189040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.748 [2024-11-26 20:55:11.189111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:14 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.748 [2024-11-26 20:55:11.189138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:07.748 [2024-11-26 20:55:11.193998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.748 [2024-11-26 20:55:11.194069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.748 [2024-11-26 20:55:11.194098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:07.748 [2024-11-26 20:55:11.198943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.748 [2024-11-26 20:55:11.199011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.748 [2024-11-26 20:55:11.199039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.748 [2024-11-26 20:55:11.204352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.748 [2024-11-26 20:55:11.204437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.748 [2024-11-26 20:55:11.204479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:07.748 [2024-11-26 20:55:11.210309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.748 [2024-11-26 20:55:11.210424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.748 [2024-11-26 20:55:11.210452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:07.748 [2024-11-26 20:55:11.215696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.748 [2024-11-26 20:55:11.215818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.748 [2024-11-26 20:55:11.215845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:07.748 [2024-11-26 20:55:11.221342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.748 [2024-11-26 20:55:11.221465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.748 [2024-11-26 20:55:11.221492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.748 [2024-11-26 20:55:11.226343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.748 [2024-11-26 20:55:11.226444] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.748 [2024-11-26 20:55:11.226478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:07.748 [2024-11-26 20:55:11.231851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.748 [2024-11-26 20:55:11.231937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.748 [2024-11-26 20:55:11.231965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:07.748 [2024-11-26 20:55:11.236765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.748 [2024-11-26 20:55:11.236893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.748 [2024-11-26 20:55:11.236920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:07.748 [2024-11-26 20:55:11.241801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.748 [2024-11-26 20:55:11.241890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.748 [2024-11-26 20:55:11.241918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.748 [2024-11-26 20:55:11.246775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.748 [2024-11-26 20:55:11.246852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.748 [2024-11-26 20:55:11.246880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:07.748 [2024-11-26 20:55:11.251817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.748 [2024-11-26 20:55:11.251897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.748 [2024-11-26 20:55:11.251924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:07.748 [2024-11-26 20:55:11.257015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.748 [2024-11-26 20:55:11.257114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.748 [2024-11-26 20:55:11.257141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:07.748 [2024-11-26 20:55:11.261990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.748 
[2024-11-26 20:55:11.262080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.748 [2024-11-26 20:55:11.262108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.748 [2024-11-26 20:55:11.267202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.748 [2024-11-26 20:55:11.267370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.748 [2024-11-26 20:55:11.267399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:07.748 [2024-11-26 20:55:11.273600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.748 [2024-11-26 20:55:11.273783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.748 [2024-11-26 20:55:11.273812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:07.748 [2024-11-26 20:55:11.279842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.748 [2024-11-26 20:55:11.280026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.748 [2024-11-26 20:55:11.280055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:07.748 [2024-11-26 20:55:11.286072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.748 [2024-11-26 20:55:11.286261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.748 [2024-11-26 20:55:11.286290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.748 [2024-11-26 20:55:11.292351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.748 [2024-11-26 20:55:11.292440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.748 [2024-11-26 20:55:11.292468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:07.748 [2024-11-26 20:55:11.297289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.748 [2024-11-26 20:55:11.297386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.748 [2024-11-26 20:55:11.297414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:07.748 [2024-11-26 20:55:11.302467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.748 [2024-11-26 20:55:11.302600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.748 [2024-11-26 20:55:11.302629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:07.748 [2024-11-26 20:55:11.308794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.748 [2024-11-26 20:55:11.308903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.748 [2024-11-26 20:55:11.308931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.748 [2024-11-26 20:55:11.314455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.748 [2024-11-26 20:55:11.314543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.748 [2024-11-26 20:55:11.314571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:07.748 [2024-11-26 20:55:11.320044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.749 [2024-11-26 20:55:11.320170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.749 [2024-11-26 20:55:11.320198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:07.749 [2024-11-26 20:55:11.325330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.749 [2024-11-26 20:55:11.325460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.749 [2024-11-26 20:55:11.325489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:07.749 [2024-11-26 20:55:11.330523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.749 [2024-11-26 20:55:11.330616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.749 [2024-11-26 20:55:11.330642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.749 [2024-11-26 20:55:11.335473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.749 [2024-11-26 20:55:11.335543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.749 [2024-11-26 20:55:11.335570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:07.749 [2024-11-26 20:55:11.340333] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.749 [2024-11-26 20:55:11.340448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.749 [2024-11-26 20:55:11.340475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:07.749 [2024-11-26 20:55:11.346327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.749 [2024-11-26 20:55:11.346509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.749 [2024-11-26 20:55:11.346539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:07.749 [2024-11-26 20:55:11.351672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.749 [2024-11-26 20:55:11.351798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.749 [2024-11-26 20:55:11.351827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.749 [2024-11-26 20:55:11.357966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.749 [2024-11-26 20:55:11.358133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.749 [2024-11-26 20:55:11.358163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:07.749 [2024-11-26 20:55:11.364085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.749 [2024-11-26 20:55:11.364195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.749 [2024-11-26 20:55:11.364223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:07.749 [2024-11-26 20:55:11.369048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.749 [2024-11-26 20:55:11.369170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.749 [2024-11-26 20:55:11.369206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:07.749 [2024-11-26 20:55:11.373916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.749 [2024-11-26 20:55:11.374017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.749 [2024-11-26 20:55:11.374044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
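For context on the repeated entries above: each triple is the host-side NVMe/TCP transport detecting a data digest mismatch on a data PDU (tcp.c: data_crc32_calc_done), printing the in-flight WRITE command, and completing it with COMMAND TRANSIENT TRANSPORT ERROR (sct 0x0 / sc 0x22, dnr:0, i.e. retriable). The data digest is a CRC32C over the PDU payload. A minimal, self-contained sketch of that kind of check follows; crc32c() and data_digest_ok() are hypothetical illustrations under that assumption, not SPDK's internal implementation, which uses its own accelerated CRC32C paths.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78), the
     * algorithm NVMe/TCP header and data digests are defined over. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++)
                crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
        }
        return crc ^ 0xFFFFFFFFu;
    }

    /* True if the digest carried with the data PDU matches the CRC32C of the
     * payload; a mismatch is what the log reports as "Data digest error",
     * after which the command completes with a transient transport error. */
    static bool data_digest_ok(const uint8_t *payload, size_t len,
                               uint32_t received_ddgst)
    {
        return crc32c(payload, len) == received_ddgst;
    }

    int main(void)
    {
        uint8_t payload[512] = { 0 };   /* example payload buffer */
        uint32_t good = crc32c(payload, sizeof(payload));

        printf("intact digest passes:    %d\n",
               data_digest_ok(payload, sizeof(payload), good));
        printf("corrupted digest passes: %d\n",
               data_digest_ok(payload, sizeof(payload), good ^ 1u));
        return 0;
    }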
00:25:07.749 [2024-11-26 20:55:11.379289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.749 [2024-11-26 20:55:11.379384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.749 [2024-11-26 20:55:11.379411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:07.749 [2024-11-26 20:55:11.384596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.749 [2024-11-26 20:55:11.384665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.749 [2024-11-26 20:55:11.384692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:07.749 [2024-11-26 20:55:11.390025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.749 [2024-11-26 20:55:11.390097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.749 [2024-11-26 20:55:11.390124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:07.749 [2024-11-26 20:55:11.395271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.749 [2024-11-26 20:55:11.395377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.749 [2024-11-26 20:55:11.395405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.749 [2024-11-26 20:55:11.400550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.749 [2024-11-26 20:55:11.400621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.749 [2024-11-26 20:55:11.400648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:07.749 [2024-11-26 20:55:11.405731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.749 [2024-11-26 20:55:11.405805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.749 [2024-11-26 20:55:11.405833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:07.749 [2024-11-26 20:55:11.411029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.749 [2024-11-26 20:55:11.411101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.749 [2024-11-26 20:55:11.411128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:07.749 [2024-11-26 20:55:11.416139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.749 [2024-11-26 20:55:11.416231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.749 [2024-11-26 20:55:11.416272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.749 [2024-11-26 20:55:11.421597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.749 [2024-11-26 20:55:11.421668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.749 [2024-11-26 20:55:11.421695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:07.749 [2024-11-26 20:55:11.427197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.749 [2024-11-26 20:55:11.427268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.749 [2024-11-26 20:55:11.427295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:07.749 [2024-11-26 20:55:11.433576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.749 [2024-11-26 20:55:11.433733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.749 [2024-11-26 20:55:11.433761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:07.749 [2024-11-26 20:55:11.439836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:07.749 [2024-11-26 20:55:11.440045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.749 [2024-11-26 20:55:11.440073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.008 [2024-11-26 20:55:11.446552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.008 [2024-11-26 20:55:11.446673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.008 [2024-11-26 20:55:11.446701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.008 [2024-11-26 20:55:11.452509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.008 [2024-11-26 20:55:11.452705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.008 [2024-11-26 20:55:11.452733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.008 [2024-11-26 20:55:11.458197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.008 [2024-11-26 20:55:11.458294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.008 [2024-11-26 20:55:11.458330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.008 [2024-11-26 20:55:11.463744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.008 [2024-11-26 20:55:11.463945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.008 [2024-11-26 20:55:11.463973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.008 [2024-11-26 20:55:11.469958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.008 [2024-11-26 20:55:11.470163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.008 [2024-11-26 20:55:11.470193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.008 [2024-11-26 20:55:11.476821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.008 [2024-11-26 20:55:11.476965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.008 [2024-11-26 20:55:11.476994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.008 [2024-11-26 20:55:11.482006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.008 [2024-11-26 20:55:11.482077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.009 [2024-11-26 20:55:11.482104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.009 [2024-11-26 20:55:11.486570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.009 [2024-11-26 20:55:11.486641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.009 [2024-11-26 20:55:11.486668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.009 [2024-11-26 20:55:11.491216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.009 [2024-11-26 20:55:11.491286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.009 [2024-11-26 20:55:11.491322] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.009 [2024-11-26 20:55:11.496880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.009 [2024-11-26 20:55:11.496997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.009 [2024-11-26 20:55:11.497026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.009 [2024-11-26 20:55:11.503104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.009 [2024-11-26 20:55:11.503273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.009 [2024-11-26 20:55:11.503309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.009 [2024-11-26 20:55:11.509752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.009 [2024-11-26 20:55:11.509875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.009 [2024-11-26 20:55:11.509903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.009 [2024-11-26 20:55:11.516063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.009 [2024-11-26 20:55:11.516278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.009 [2024-11-26 20:55:11.516326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.009 5488.00 IOPS, 686.00 MiB/s [2024-11-26T19:55:11.706Z] [2024-11-26 20:55:11.522955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.009 [2024-11-26 20:55:11.523116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.009 [2024-11-26 20:55:11.523145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.009 [2024-11-26 20:55:11.527620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.009 [2024-11-26 20:55:11.527797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.009 [2024-11-26 20:55:11.527825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.009 [2024-11-26 20:55:11.531954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.009 [2024-11-26 20:55:11.532091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7552 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.009 [2024-11-26 20:55:11.532120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.009 [2024-11-26 20:55:11.536201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.009 [2024-11-26 20:55:11.536378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.009 [2024-11-26 20:55:11.536408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.009 [2024-11-26 20:55:11.540507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.009 [2024-11-26 20:55:11.540625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.009 [2024-11-26 20:55:11.540652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.009 [2024-11-26 20:55:11.545134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.009 [2024-11-26 20:55:11.545279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.009 [2024-11-26 20:55:11.545318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.009 [2024-11-26 20:55:11.551301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.009 [2024-11-26 20:55:11.551496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.009 [2024-11-26 20:55:11.551526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.009 [2024-11-26 20:55:11.556253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.009 [2024-11-26 20:55:11.556388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.009 [2024-11-26 20:55:11.556420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.009 [2024-11-26 20:55:11.560614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.009 [2024-11-26 20:55:11.560759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.009 [2024-11-26 20:55:11.560788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.009 [2024-11-26 20:55:11.565087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.009 [2024-11-26 20:55:11.565245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:14 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.009 [2024-11-26 20:55:11.565274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.009 [2024-11-26 20:55:11.570103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.009 [2024-11-26 20:55:11.570206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.009 [2024-11-26 20:55:11.570235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.009 [2024-11-26 20:55:11.575023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.009 [2024-11-26 20:55:11.575154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.009 [2024-11-26 20:55:11.575183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.009 [2024-11-26 20:55:11.581106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.009 [2024-11-26 20:55:11.581254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.009 [2024-11-26 20:55:11.581282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.009 [2024-11-26 20:55:11.586484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.009 [2024-11-26 20:55:11.586640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.009 [2024-11-26 20:55:11.586668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.009 [2024-11-26 20:55:11.591297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.009 [2024-11-26 20:55:11.591433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.009 [2024-11-26 20:55:11.591462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.009 [2024-11-26 20:55:11.596209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.010 [2024-11-26 20:55:11.596347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.010 [2024-11-26 20:55:11.596375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.010 [2024-11-26 20:55:11.601422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.010 [2024-11-26 20:55:11.601561] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.010 [2024-11-26 20:55:11.601595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.010 [2024-11-26 20:55:11.606462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.010 [2024-11-26 20:55:11.606629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.010 [2024-11-26 20:55:11.606658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.010 [2024-11-26 20:55:11.612389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.010 [2024-11-26 20:55:11.612601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.010 [2024-11-26 20:55:11.612631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.010 [2024-11-26 20:55:11.617931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.010 [2024-11-26 20:55:11.618085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.010 [2024-11-26 20:55:11.618114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.010 [2024-11-26 20:55:11.622441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.010 [2024-11-26 20:55:11.622583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.010 [2024-11-26 20:55:11.622610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.010 [2024-11-26 20:55:11.626782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.010 [2024-11-26 20:55:11.626939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.010 [2024-11-26 20:55:11.626966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.010 [2024-11-26 20:55:11.631119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.010 [2024-11-26 20:55:11.631243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.010 [2024-11-26 20:55:11.631270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.010 [2024-11-26 20:55:11.635882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.010 
[2024-11-26 20:55:11.636044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.010 [2024-11-26 20:55:11.636072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.010 [2024-11-26 20:55:11.640790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.010 [2024-11-26 20:55:11.640876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.010 [2024-11-26 20:55:11.640903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.010 [2024-11-26 20:55:11.645563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.010 [2024-11-26 20:55:11.645654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.010 [2024-11-26 20:55:11.645680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.010 [2024-11-26 20:55:11.650292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.010 [2024-11-26 20:55:11.650390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.010 [2024-11-26 20:55:11.650417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.010 [2024-11-26 20:55:11.654929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.010 [2024-11-26 20:55:11.655016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.010 [2024-11-26 20:55:11.655043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.010 [2024-11-26 20:55:11.659391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.010 [2024-11-26 20:55:11.659475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.010 [2024-11-26 20:55:11.659502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.010 [2024-11-26 20:55:11.663902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.010 [2024-11-26 20:55:11.664004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.010 [2024-11-26 20:55:11.664032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.010 [2024-11-26 20:55:11.668215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.010 [2024-11-26 20:55:11.668308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.010 [2024-11-26 20:55:11.668335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.010 [2024-11-26 20:55:11.672795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.010 [2024-11-26 20:55:11.672884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.010 [2024-11-26 20:55:11.672911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.010 [2024-11-26 20:55:11.677379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.010 [2024-11-26 20:55:11.677466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.010 [2024-11-26 20:55:11.677493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.010 [2024-11-26 20:55:11.681789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.010 [2024-11-26 20:55:11.681886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.010 [2024-11-26 20:55:11.681912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.010 [2024-11-26 20:55:11.686284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.010 [2024-11-26 20:55:11.686376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.010 [2024-11-26 20:55:11.686403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.010 [2024-11-26 20:55:11.690842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.010 [2024-11-26 20:55:11.690932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.010 [2024-11-26 20:55:11.690960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.010 [2024-11-26 20:55:11.695355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.010 [2024-11-26 20:55:11.695442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.010 [2024-11-26 20:55:11.695469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.010 [2024-11-26 20:55:11.700096] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.010 [2024-11-26 20:55:11.700198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.010 [2024-11-26 20:55:11.700226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.269 [2024-11-26 20:55:11.705375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.269 [2024-11-26 20:55:11.705471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.269 [2024-11-26 20:55:11.705499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.269 [2024-11-26 20:55:11.710253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.269 [2024-11-26 20:55:11.710343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.269 [2024-11-26 20:55:11.710371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.269 [2024-11-26 20:55:11.714424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.269 [2024-11-26 20:55:11.714511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.269 [2024-11-26 20:55:11.714537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.269 [2024-11-26 20:55:11.718651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.269 [2024-11-26 20:55:11.718739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.269 [2024-11-26 20:55:11.718767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.269 [2024-11-26 20:55:11.722806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.269 [2024-11-26 20:55:11.722893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.270 [2024-11-26 20:55:11.722925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.270 [2024-11-26 20:55:11.726982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.270 [2024-11-26 20:55:11.727066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.270 [2024-11-26 20:55:11.727092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 
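Reading the completion lines themselves: "(00/22)" is status code type 0x0 with status code 0x22 (Transient Transport Error, as the log text spells out), and the trailing p/m/dnr fields are the phase, more, and do-not-retry bits, so dnr:0 marks each failed WRITE as retriable. The single throughput sample interleaved further up (5488.00 IOPS, 686.00 MiB/s) is consistent with roughly 128 KiB per I/O. A small decoder for the 15-bit status field in completion dword 3, written against the NVMe CQE layout rather than any SPDK structure, illustrates the breakdown; the names are hypothetical.

    #include <stdint.h>
    #include <stdio.h>

    /* Fields of the NVMe completion status (CQE DW3 bits 31:17). */
    struct nvme_status {
        uint8_t sc;    /* status code */
        uint8_t sct;   /* status code type */
        uint8_t crd;   /* command retry delay */
        uint8_t more;  /* more status information available */
        uint8_t dnr;   /* do not retry */
    };

    static struct nvme_status decode_status(uint32_t cqe_dw3)
    {
        uint32_t sf = cqe_dw3 >> 17;   /* 15-bit status field */
        struct nvme_status s = {
            .sc   = (uint8_t)(sf & 0xFFu),
            .sct  = (uint8_t)((sf >> 8) & 0x7u),
            .crd  = (uint8_t)((sf >> 11) & 0x3u),
            .more = (uint8_t)((sf >> 13) & 0x1u),
            .dnr  = (uint8_t)((sf >> 14) & 0x1u),
        };
        return s;
    }

    int main(void)
    {
        /* SCT 0x0 / SC 0x22, DNR clear: the "(00/22) ... dnr:0" case above. */
        uint32_t dw3 = (0x0u << 8 | 0x22u) << 17;
        struct nvme_status s = decode_status(dw3);
        printf("sct=%#x sc=%#x dnr=%u\n", s.sct, s.sc, s.dnr);
        return 0;
    }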
00:25:08.270 [2024-11-26 20:55:11.731488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.270 [2024-11-26 20:55:11.731647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.270 [2024-11-26 20:55:11.731675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.270 [2024-11-26 20:55:11.736541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.270 [2024-11-26 20:55:11.736745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.270 [2024-11-26 20:55:11.736773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.270 [2024-11-26 20:55:11.741697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.270 [2024-11-26 20:55:11.741831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.270 [2024-11-26 20:55:11.741858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.270 [2024-11-26 20:55:11.747613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.270 [2024-11-26 20:55:11.747810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.270 [2024-11-26 20:55:11.747838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.270 [2024-11-26 20:55:11.752178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.270 [2024-11-26 20:55:11.752324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.270 [2024-11-26 20:55:11.752351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.270 [2024-11-26 20:55:11.756440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.270 [2024-11-26 20:55:11.756583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.270 [2024-11-26 20:55:11.756612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.270 [2024-11-26 20:55:11.761175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.270 [2024-11-26 20:55:11.761289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.270 [2024-11-26 20:55:11.761325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.270 [2024-11-26 20:55:11.765643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.270 [2024-11-26 20:55:11.765760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.270 [2024-11-26 20:55:11.765787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.270 [2024-11-26 20:55:11.769902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.270 [2024-11-26 20:55:11.769995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.270 [2024-11-26 20:55:11.770023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.270 [2024-11-26 20:55:11.774458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.270 [2024-11-26 20:55:11.774616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.270 [2024-11-26 20:55:11.774644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.270 [2024-11-26 20:55:11.779642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.270 [2024-11-26 20:55:11.779831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.270 [2024-11-26 20:55:11.779859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.270 [2024-11-26 20:55:11.784737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.270 [2024-11-26 20:55:11.784903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.270 [2024-11-26 20:55:11.784930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.270 [2024-11-26 20:55:11.790539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.270 [2024-11-26 20:55:11.790639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.270 [2024-11-26 20:55:11.790667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.270 [2024-11-26 20:55:11.796140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.270 [2024-11-26 20:55:11.796352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.270 [2024-11-26 20:55:11.796379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.270 [2024-11-26 20:55:11.802380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.270 [2024-11-26 20:55:11.802547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.270 [2024-11-26 20:55:11.802574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.270 [2024-11-26 20:55:11.808484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.270 [2024-11-26 20:55:11.808679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.270 [2024-11-26 20:55:11.808706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.270 [2024-11-26 20:55:11.814367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.270 [2024-11-26 20:55:11.814526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.270 [2024-11-26 20:55:11.814554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.270 [2024-11-26 20:55:11.820560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.270 [2024-11-26 20:55:11.820770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.270 [2024-11-26 20:55:11.820811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.270 [2024-11-26 20:55:11.826860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.270 [2024-11-26 20:55:11.827026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.270 [2024-11-26 20:55:11.827053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.270 [2024-11-26 20:55:11.832863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.270 [2024-11-26 20:55:11.832992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.270 [2024-11-26 20:55:11.833019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.270 [2024-11-26 20:55:11.838880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.270 [2024-11-26 20:55:11.838977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.270 [2024-11-26 20:55:11.839004] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.270 [2024-11-26 20:55:11.845128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.270 [2024-11-26 20:55:11.845348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.270 [2024-11-26 20:55:11.845375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.270 [2024-11-26 20:55:11.851557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.270 [2024-11-26 20:55:11.851714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.270 [2024-11-26 20:55:11.851741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.270 [2024-11-26 20:55:11.857033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.270 [2024-11-26 20:55:11.857210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.270 [2024-11-26 20:55:11.857239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.270 [2024-11-26 20:55:11.861801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.270 [2024-11-26 20:55:11.861907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.270 [2024-11-26 20:55:11.861939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.270 [2024-11-26 20:55:11.866723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.270 [2024-11-26 20:55:11.866906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.270 [2024-11-26 20:55:11.866933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.271 [2024-11-26 20:55:11.871954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.271 [2024-11-26 20:55:11.872099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.271 [2024-11-26 20:55:11.872126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.271 [2024-11-26 20:55:11.877565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.271 [2024-11-26 20:55:11.877711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.271 
[2024-11-26 20:55:11.877738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.271 [2024-11-26 20:55:11.883291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.271 [2024-11-26 20:55:11.883430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.271 [2024-11-26 20:55:11.883458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.271 [2024-11-26 20:55:11.888489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.271 [2024-11-26 20:55:11.888619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.271 [2024-11-26 20:55:11.888647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.271 [2024-11-26 20:55:11.893589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.271 [2024-11-26 20:55:11.893734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.271 [2024-11-26 20:55:11.893761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.271 [2024-11-26 20:55:11.898738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.271 [2024-11-26 20:55:11.898929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.271 [2024-11-26 20:55:11.898957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.271 [2024-11-26 20:55:11.903846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.271 [2024-11-26 20:55:11.904004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.271 [2024-11-26 20:55:11.904032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.271 [2024-11-26 20:55:11.909174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.271 [2024-11-26 20:55:11.909328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.271 [2024-11-26 20:55:11.909355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.271 [2024-11-26 20:55:11.915266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.271 [2024-11-26 20:55:11.915473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8608 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.271 [2024-11-26 20:55:11.915502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.271 [2024-11-26 20:55:11.920426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.271 [2024-11-26 20:55:11.920548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.271 [2024-11-26 20:55:11.920576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.271 [2024-11-26 20:55:11.925606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.271 [2024-11-26 20:55:11.925753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.271 [2024-11-26 20:55:11.925780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.271 [2024-11-26 20:55:11.930827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.271 [2024-11-26 20:55:11.930961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.271 [2024-11-26 20:55:11.930989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.271 [2024-11-26 20:55:11.935994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.271 [2024-11-26 20:55:11.936163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.271 [2024-11-26 20:55:11.936191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.271 [2024-11-26 20:55:11.941682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.271 [2024-11-26 20:55:11.941875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.271 [2024-11-26 20:55:11.941903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.271 [2024-11-26 20:55:11.947165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.271 [2024-11-26 20:55:11.947282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.271 [2024-11-26 20:55:11.947317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.271 [2024-11-26 20:55:11.951477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.271 [2024-11-26 20:55:11.951568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:14 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.271 [2024-11-26 20:55:11.951600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.271 [2024-11-26 20:55:11.956003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.271 [2024-11-26 20:55:11.956112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.271 [2024-11-26 20:55:11.956139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.271 [2024-11-26 20:55:11.960458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.271 [2024-11-26 20:55:11.960581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.271 [2024-11-26 20:55:11.960609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.530 [2024-11-26 20:55:11.964952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.530 [2024-11-26 20:55:11.965058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.530 [2024-11-26 20:55:11.965101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.530 [2024-11-26 20:55:11.969490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.530 [2024-11-26 20:55:11.969617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.530 [2024-11-26 20:55:11.969644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.530 [2024-11-26 20:55:11.973732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.530 [2024-11-26 20:55:11.973828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.530 [2024-11-26 20:55:11.973855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.530 [2024-11-26 20:55:11.978049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.530 [2024-11-26 20:55:11.978168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.530 [2024-11-26 20:55:11.978195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.530 [2024-11-26 20:55:11.982476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.530 [2024-11-26 20:55:11.982616] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.530 [2024-11-26 20:55:11.982643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.530 [2024-11-26 20:55:11.987107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.530 [2024-11-26 20:55:11.987237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.530 [2024-11-26 20:55:11.987278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.530 [2024-11-26 20:55:11.991764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.530 [2024-11-26 20:55:11.991865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.530 [2024-11-26 20:55:11.991892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.530 [2024-11-26 20:55:11.996267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.531 [2024-11-26 20:55:11.996401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.531 [2024-11-26 20:55:11.996428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.531 [2024-11-26 20:55:12.000648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.531 [2024-11-26 20:55:12.000788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.531 [2024-11-26 20:55:12.000815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.531 [2024-11-26 20:55:12.005528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.531 [2024-11-26 20:55:12.005624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.531 [2024-11-26 20:55:12.005652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.531 [2024-11-26 20:55:12.010748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.531 [2024-11-26 20:55:12.010891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.531 [2024-11-26 20:55:12.010919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.531 [2024-11-26 20:55:12.015229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.531 
[2024-11-26 20:55:12.015338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.531 [2024-11-26 20:55:12.015366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.531 [2024-11-26 20:55:12.019647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.531 [2024-11-26 20:55:12.019766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.531 [2024-11-26 20:55:12.019793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.531 [2024-11-26 20:55:12.024264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.531 [2024-11-26 20:55:12.024422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.531 [2024-11-26 20:55:12.024449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.531 [2024-11-26 20:55:12.028689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.531 [2024-11-26 20:55:12.028842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.531 [2024-11-26 20:55:12.028883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.531 [2024-11-26 20:55:12.033072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.531 [2024-11-26 20:55:12.033183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.531 [2024-11-26 20:55:12.033210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.531 [2024-11-26 20:55:12.037358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.531 [2024-11-26 20:55:12.037474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.531 [2024-11-26 20:55:12.037502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.531 [2024-11-26 20:55:12.041784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.531 [2024-11-26 20:55:12.041913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.531 [2024-11-26 20:55:12.041940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.531 [2024-11-26 20:55:12.047176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.531 [2024-11-26 20:55:12.047264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.531 [2024-11-26 20:55:12.047313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.531 [2024-11-26 20:55:12.052191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.531 [2024-11-26 20:55:12.052330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.531 [2024-11-26 20:55:12.052358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.531 [2024-11-26 20:55:12.056762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.531 [2024-11-26 20:55:12.056852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.531 [2024-11-26 20:55:12.056879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.531 [2024-11-26 20:55:12.061174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.531 [2024-11-26 20:55:12.061285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.531 [2024-11-26 20:55:12.061321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.531 [2024-11-26 20:55:12.066991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.531 [2024-11-26 20:55:12.067137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.531 [2024-11-26 20:55:12.067164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.531 [2024-11-26 20:55:12.072209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.531 [2024-11-26 20:55:12.072364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.531 [2024-11-26 20:55:12.072398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.531 [2024-11-26 20:55:12.076772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.531 [2024-11-26 20:55:12.076930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.531 [2024-11-26 20:55:12.076959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.531 [2024-11-26 20:55:12.081276] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.531 [2024-11-26 20:55:12.081391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.531 [2024-11-26 20:55:12.081419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.532 [2024-11-26 20:55:12.085876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.532 [2024-11-26 20:55:12.085993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.532 [2024-11-26 20:55:12.086021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.532 [2024-11-26 20:55:12.090330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.532 [2024-11-26 20:55:12.090472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.532 [2024-11-26 20:55:12.090501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.532 [2024-11-26 20:55:12.094845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.532 [2024-11-26 20:55:12.095007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.532 [2024-11-26 20:55:12.095035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.532 [2024-11-26 20:55:12.099355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.532 [2024-11-26 20:55:12.099499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.532 [2024-11-26 20:55:12.099528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.532 [2024-11-26 20:55:12.103824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.532 [2024-11-26 20:55:12.103950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.532 [2024-11-26 20:55:12.103978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.532 [2024-11-26 20:55:12.108415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.532 [2024-11-26 20:55:12.108502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.532 [2024-11-26 20:55:12.108529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 
00:25:08.532 [2024-11-26 20:55:12.112768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.532 [2024-11-26 20:55:12.112879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.532 [2024-11-26 20:55:12.112906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.532 [2024-11-26 20:55:12.117148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.532 [2024-11-26 20:55:12.117270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.532 [2024-11-26 20:55:12.117298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.532 [2024-11-26 20:55:12.121595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.532 [2024-11-26 20:55:12.121700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.532 [2024-11-26 20:55:12.121727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.532 [2024-11-26 20:55:12.125870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.532 [2024-11-26 20:55:12.125957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.532 [2024-11-26 20:55:12.125984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.532 [2024-11-26 20:55:12.130171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.532 [2024-11-26 20:55:12.130341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.532 [2024-11-26 20:55:12.130369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.532 [2024-11-26 20:55:12.135276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.532 [2024-11-26 20:55:12.135423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.532 [2024-11-26 20:55:12.135458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.532 [2024-11-26 20:55:12.140385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.532 [2024-11-26 20:55:12.140494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.532 [2024-11-26 20:55:12.140521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.532 [2024-11-26 20:55:12.146250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.532 [2024-11-26 20:55:12.146428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.532 [2024-11-26 20:55:12.146456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.532 [2024-11-26 20:55:12.151495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.532 [2024-11-26 20:55:12.151593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.532 [2024-11-26 20:55:12.151620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.532 [2024-11-26 20:55:12.155768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.532 [2024-11-26 20:55:12.155940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.532 [2024-11-26 20:55:12.155968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.532 [2024-11-26 20:55:12.160300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.532 [2024-11-26 20:55:12.160433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.532 [2024-11-26 20:55:12.160460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.532 [2024-11-26 20:55:12.164703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.532 [2024-11-26 20:55:12.164829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.532 [2024-11-26 20:55:12.164856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.532 [2024-11-26 20:55:12.169194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.532 [2024-11-26 20:55:12.169331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.532 [2024-11-26 20:55:12.169359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.532 [2024-11-26 20:55:12.173801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.532 [2024-11-26 20:55:12.173944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.532 [2024-11-26 20:55:12.173970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.532 [2024-11-26 20:55:12.178017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.532 [2024-11-26 20:55:12.178146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.532 [2024-11-26 20:55:12.178172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.532 [2024-11-26 20:55:12.182768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.532 [2024-11-26 20:55:12.182921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.532 [2024-11-26 20:55:12.182948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.532 [2024-11-26 20:55:12.187863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.532 [2024-11-26 20:55:12.188002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.532 [2024-11-26 20:55:12.188028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.532 [2024-11-26 20:55:12.193376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.532 [2024-11-26 20:55:12.193592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.532 [2024-11-26 20:55:12.193627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.533 [2024-11-26 20:55:12.198837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.533 [2024-11-26 20:55:12.198980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.533 [2024-11-26 20:55:12.199007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.533 [2024-11-26 20:55:12.203246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.533 [2024-11-26 20:55:12.203358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.533 [2024-11-26 20:55:12.203386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.533 [2024-11-26 20:55:12.207634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.533 [2024-11-26 20:55:12.207791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.533 [2024-11-26 20:55:12.207820] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.533 [2024-11-26 20:55:12.212122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.533 [2024-11-26 20:55:12.212241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.533 [2024-11-26 20:55:12.212267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.533 [2024-11-26 20:55:12.216746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.533 [2024-11-26 20:55:12.216834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.533 [2024-11-26 20:55:12.216861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.533 [2024-11-26 20:55:12.221321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.533 [2024-11-26 20:55:12.221447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.533 [2024-11-26 20:55:12.221474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.792 [2024-11-26 20:55:12.226176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.792 [2024-11-26 20:55:12.226353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.792 [2024-11-26 20:55:12.226381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.792 [2024-11-26 20:55:12.231272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.792 [2024-11-26 20:55:12.231433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.792 [2024-11-26 20:55:12.231461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.792 [2024-11-26 20:55:12.237517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.792 [2024-11-26 20:55:12.237645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.792 [2024-11-26 20:55:12.237673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.792 [2024-11-26 20:55:12.242231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.792 [2024-11-26 20:55:12.242338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.792 
[2024-11-26 20:55:12.242366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.792 [2024-11-26 20:55:12.246329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.792 [2024-11-26 20:55:12.246478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.792 [2024-11-26 20:55:12.246505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.792 [2024-11-26 20:55:12.250745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.792 [2024-11-26 20:55:12.250863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.792 [2024-11-26 20:55:12.250890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.792 [2024-11-26 20:55:12.255056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.792 [2024-11-26 20:55:12.255145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.792 [2024-11-26 20:55:12.255172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.792 [2024-11-26 20:55:12.259398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.792 [2024-11-26 20:55:12.259533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.792 [2024-11-26 20:55:12.259559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.792 [2024-11-26 20:55:12.264394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.792 [2024-11-26 20:55:12.264549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.792 [2024-11-26 20:55:12.264576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.792 [2024-11-26 20:55:12.269513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.792 [2024-11-26 20:55:12.269619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.792 [2024-11-26 20:55:12.269647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.792 [2024-11-26 20:55:12.274792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.792 [2024-11-26 20:55:12.274943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16352 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.792 [2024-11-26 20:55:12.274970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.792 [2024-11-26 20:55:12.280351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.792 [2024-11-26 20:55:12.280490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.792 [2024-11-26 20:55:12.280518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.792 [2024-11-26 20:55:12.285549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.792 [2024-11-26 20:55:12.285717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.792 [2024-11-26 20:55:12.285745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.793 [2024-11-26 20:55:12.290784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.793 [2024-11-26 20:55:12.290941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.793 [2024-11-26 20:55:12.290970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.793 [2024-11-26 20:55:12.295872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.793 [2024-11-26 20:55:12.296005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.793 [2024-11-26 20:55:12.296033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.793 [2024-11-26 20:55:12.301072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.793 [2024-11-26 20:55:12.301235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.793 [2024-11-26 20:55:12.301263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.793 [2024-11-26 20:55:12.306168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.793 [2024-11-26 20:55:12.306335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.793 [2024-11-26 20:55:12.306364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.793 [2024-11-26 20:55:12.311387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.793 [2024-11-26 20:55:12.311501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:14 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.793 [2024-11-26 20:55:12.311530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.793 [2024-11-26 20:55:12.316604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.793 [2024-11-26 20:55:12.316718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.793 [2024-11-26 20:55:12.316745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.793 [2024-11-26 20:55:12.321787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.793 [2024-11-26 20:55:12.321916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.793 [2024-11-26 20:55:12.321949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.793 [2024-11-26 20:55:12.326986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.793 [2024-11-26 20:55:12.327132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.793 [2024-11-26 20:55:12.327159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.793 [2024-11-26 20:55:12.332119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.793 [2024-11-26 20:55:12.332345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.793 [2024-11-26 20:55:12.332376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.793 [2024-11-26 20:55:12.337384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.793 [2024-11-26 20:55:12.337542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.793 [2024-11-26 20:55:12.337569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.793 [2024-11-26 20:55:12.342609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.793 [2024-11-26 20:55:12.342755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.793 [2024-11-26 20:55:12.342782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.793 [2024-11-26 20:55:12.347952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.793 [2024-11-26 20:55:12.348084] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.793 [2024-11-26 20:55:12.348111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.793 [2024-11-26 20:55:12.353156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.793 [2024-11-26 20:55:12.353318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.793 [2024-11-26 20:55:12.353345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.793 [2024-11-26 20:55:12.358274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.793 [2024-11-26 20:55:12.358497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.793 [2024-11-26 20:55:12.358528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.793 [2024-11-26 20:55:12.363555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.793 [2024-11-26 20:55:12.363698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.793 [2024-11-26 20:55:12.363726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.793 [2024-11-26 20:55:12.368938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.793 [2024-11-26 20:55:12.369162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.793 [2024-11-26 20:55:12.369207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.793 [2024-11-26 20:55:12.374109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.793 [2024-11-26 20:55:12.374274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.793 [2024-11-26 20:55:12.374309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.793 [2024-11-26 20:55:12.379401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.793 [2024-11-26 20:55:12.379571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.793 [2024-11-26 20:55:12.379599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.793 [2024-11-26 20:55:12.384583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.793 
[2024-11-26 20:55:12.384788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.793 [2024-11-26 20:55:12.384815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.793 [2024-11-26 20:55:12.389956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.793 [2024-11-26 20:55:12.390120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.793 [2024-11-26 20:55:12.390147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.793 [2024-11-26 20:55:12.395185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.793 [2024-11-26 20:55:12.395343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.793 [2024-11-26 20:55:12.395370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.793 [2024-11-26 20:55:12.400397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.793 [2024-11-26 20:55:12.400538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.793 [2024-11-26 20:55:12.400565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.793 [2024-11-26 20:55:12.405572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.793 [2024-11-26 20:55:12.405741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.793 [2024-11-26 20:55:12.405785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.793 [2024-11-26 20:55:12.410855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.793 [2024-11-26 20:55:12.411078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.793 [2024-11-26 20:55:12.411116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.793 [2024-11-26 20:55:12.416115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.793 [2024-11-26 20:55:12.416293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.793 [2024-11-26 20:55:12.416332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.793 [2024-11-26 20:55:12.421298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.793 [2024-11-26 20:55:12.421489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.793 [2024-11-26 20:55:12.421518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.793 [2024-11-26 20:55:12.426490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.793 [2024-11-26 20:55:12.426716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.793 [2024-11-26 20:55:12.426747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.793 [2024-11-26 20:55:12.431727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.794 [2024-11-26 20:55:12.431923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.794 [2024-11-26 20:55:12.431953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.794 [2024-11-26 20:55:12.436892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.794 [2024-11-26 20:55:12.437111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.794 [2024-11-26 20:55:12.437141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.794 [2024-11-26 20:55:12.442219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.794 [2024-11-26 20:55:12.442355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.794 [2024-11-26 20:55:12.442396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.794 [2024-11-26 20:55:12.447271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.794 [2024-11-26 20:55:12.447411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.794 [2024-11-26 20:55:12.447440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.794 [2024-11-26 20:55:12.452371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.794 [2024-11-26 20:55:12.452492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.794 [2024-11-26 20:55:12.452520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.794 [2024-11-26 20:55:12.457550] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.794 [2024-11-26 20:55:12.457694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.794 [2024-11-26 20:55:12.457723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.794 [2024-11-26 20:55:12.462800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.794 [2024-11-26 20:55:12.463025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.794 [2024-11-26 20:55:12.463055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:08.794 [2024-11-26 20:55:12.467897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.794 [2024-11-26 20:55:12.468103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.794 [2024-11-26 20:55:12.468133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:08.794 [2024-11-26 20:55:12.473043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.794 [2024-11-26 20:55:12.473207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.794 [2024-11-26 20:55:12.473237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:08.794 [2024-11-26 20:55:12.478367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.794 [2024-11-26 20:55:12.478511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.794 [2024-11-26 20:55:12.478540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:08.794 [2024-11-26 20:55:12.483539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:08.794 [2024-11-26 20:55:12.483750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.794 [2024-11-26 20:55:12.483780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.052 [2024-11-26 20:55:12.488742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:09.052 [2024-11-26 20:55:12.488945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.052 [2024-11-26 20:55:12.488975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 
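The long run above is the nvmf_digest_error path exercising data digests: every PDU whose CRC32C check fails is reported by tcp.c as a data digest error, and the corresponding WRITE then completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22). The test counts those completions over the bperf RPC socket, as shown further down where bdev_get_iostat is piped through jq. A minimal sketch of that readback, assuming the same /var/tmp/bperf.sock socket and nvme0n1 bdev name used in this run:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Pull the per-bdev NVMe error counters from the running bdevperf app and keep
# only the transient transport error count that the digest test asserts on.
errs=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errs > 0 )) && echo "observed $errs transient transport errors"
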
00:25:09.052 [2024-11-26 20:55:12.494073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:09.052 [2024-11-26 20:55:12.494285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.052 [2024-11-26 20:55:12.494324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:09.052 [2024-11-26 20:55:12.499206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:09.052 [2024-11-26 20:55:12.499380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.052 [2024-11-26 20:55:12.499424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:09.052 [2024-11-26 20:55:12.504413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:09.052 [2024-11-26 20:55:12.504561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.052 [2024-11-26 20:55:12.504590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:09.052 [2024-11-26 20:55:12.509633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:09.052 [2024-11-26 20:55:12.509779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.052 [2024-11-26 20:55:12.509809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:09.052 [2024-11-26 20:55:12.514779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:09.052 [2024-11-26 20:55:12.514985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.052 [2024-11-26 20:55:12.515015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:09.052 5871.50 IOPS, 733.94 MiB/s [2024-11-26T19:55:12.749Z] [2024-11-26 20:55:12.521256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1e2c090) with pdu=0x200016eff3c8 00:25:09.052 [2024-11-26 20:55:12.521404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.052 [2024-11-26 20:55:12.521434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:09.052 00:25:09.052 Latency(us) 00:25:09.052 [2024-11-26T19:55:12.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.052 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:09.052 nvme0n1 : 2.00 5868.72 733.59 0.00 0.00 2718.99 1905.40 7670.14 00:25:09.052 [2024-11-26T19:55:12.749Z] 
=================================================================================================================== 00:25:09.052 [2024-11-26T19:55:12.749Z] Total : 5868.72 733.59 0.00 0.00 2718.99 1905.40 7670.14 00:25:09.052 { 00:25:09.052 "results": [ 00:25:09.052 { 00:25:09.052 "job": "nvme0n1", 00:25:09.052 "core_mask": "0x2", 00:25:09.052 "workload": "randwrite", 00:25:09.052 "status": "finished", 00:25:09.052 "queue_depth": 16, 00:25:09.052 "io_size": 131072, 00:25:09.052 "runtime": 2.004526, 00:25:09.052 "iops": 5868.719088702267, 00:25:09.052 "mibps": 733.5898860877834, 00:25:09.052 "io_failed": 0, 00:25:09.052 "io_timeout": 0, 00:25:09.052 "avg_latency_us": 2718.9925482640074, 00:25:09.052 "min_latency_us": 1905.3985185185186, 00:25:09.052 "max_latency_us": 7670.139259259259 00:25:09.052 } 00:25:09.052 ], 00:25:09.052 "core_count": 1 00:25:09.052 } 00:25:09.052 20:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:09.052 20:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:09.052 20:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:09.052 | .driver_specific 00:25:09.052 | .nvme_error 00:25:09.052 | .status_code 00:25:09.052 | .command_transient_transport_error' 00:25:09.052 20:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:09.309 20:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 379 > 0 )) 00:25:09.309 20:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1765636 00:25:09.309 20:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1765636 ']' 00:25:09.309 20:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1765636 00:25:09.309 20:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:09.309 20:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:09.309 20:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1765636 00:25:09.309 20:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:09.309 20:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:09.309 20:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1765636' 00:25:09.309 killing process with pid 1765636 00:25:09.309 20:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1765636 00:25:09.309 Received shutdown signal, test time was about 2.000000 seconds 00:25:09.309 00:25:09.309 Latency(us) 00:25:09.309 [2024-11-26T19:55:13.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.309 [2024-11-26T19:55:13.006Z] =================================================================================================================== 00:25:09.309 [2024-11-26T19:55:13.006Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:09.309 20:55:12 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1765636 00:25:09.567 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1764267 00:25:09.567 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1764267 ']' 00:25:09.567 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1764267 00:25:09.567 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:09.567 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:09.567 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1764267 00:25:09.567 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:09.567 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:09.567 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1764267' 00:25:09.567 killing process with pid 1764267 00:25:09.567 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1764267 00:25:09.567 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1764267 00:25:09.824 00:25:09.824 real 0m15.286s 00:25:09.824 user 0m30.727s 00:25:09.824 sys 0m4.231s 00:25:09.824 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:09.824 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:09.824 ************************************ 00:25:09.824 END TEST nvmf_digest_error 00:25:09.824 ************************************ 00:25:09.824 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:25:09.824 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:25:09.824 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:09.824 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:25:09.824 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:09.824 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:25:09.824 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:09.824 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:09.824 rmmod nvme_tcp 00:25:09.824 rmmod nvme_fabrics 00:25:09.824 rmmod nvme_keyring 00:25:09.824 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:09.824 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:25:09.824 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:25:09.824 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1764267 ']' 00:25:09.824 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1764267 00:25:09.824 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 1764267 ']' 00:25:09.824 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest -- 
common/autotest_common.sh@958 -- # kill -0 1764267 00:25:09.824 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1764267) - No such process 00:25:09.824 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 1764267 is not found' 00:25:09.824 Process with pid 1764267 is not found 00:25:09.824 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:09.824 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:09.824 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:09.824 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:25:09.824 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:25:09.824 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:09.824 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:25:09.824 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:09.824 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:09.824 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.824 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:09.824 20:55:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.757 20:55:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:11.757 00:25:11.757 real 0m35.502s 00:25:11.757 user 1m2.894s 00:25:11.757 sys 0m10.232s 00:25:11.757 20:55:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:11.757 20:55:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:11.757 ************************************ 00:25:11.757 END TEST nvmf_digest 00:25:11.757 ************************************ 00:25:12.016 20:55:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:25:12.016 20:55:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:25:12.016 20:55:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:25:12.016 20:55:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:12.016 20:55:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:12.016 20:55:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:12.016 20:55:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.016 ************************************ 00:25:12.016 START TEST nvmf_bdevperf 00:25:12.016 ************************************ 00:25:12.016 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:12.016 * Looking for test storage... 
00:25:12.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:12.016 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:12.016 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:25:12.016 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:12.016 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:12.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.017 --rc genhtml_branch_coverage=1 00:25:12.017 --rc genhtml_function_coverage=1 00:25:12.017 --rc genhtml_legend=1 00:25:12.017 --rc geninfo_all_blocks=1 00:25:12.017 --rc geninfo_unexecuted_blocks=1 00:25:12.017 00:25:12.017 ' 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:12.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.017 --rc genhtml_branch_coverage=1 00:25:12.017 --rc genhtml_function_coverage=1 00:25:12.017 --rc genhtml_legend=1 00:25:12.017 --rc geninfo_all_blocks=1 00:25:12.017 --rc geninfo_unexecuted_blocks=1 00:25:12.017 00:25:12.017 ' 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:12.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.017 --rc genhtml_branch_coverage=1 00:25:12.017 --rc genhtml_function_coverage=1 00:25:12.017 --rc genhtml_legend=1 00:25:12.017 --rc geninfo_all_blocks=1 00:25:12.017 --rc geninfo_unexecuted_blocks=1 00:25:12.017 00:25:12.017 ' 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:12.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.017 --rc genhtml_branch_coverage=1 00:25:12.017 --rc genhtml_function_coverage=1 00:25:12.017 --rc genhtml_legend=1 00:25:12.017 --rc geninfo_all_blocks=1 00:25:12.017 --rc geninfo_unexecuted_blocks=1 00:25:12.017 00:25:12.017 ' 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:12.017 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.018 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.018 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.018 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:25:12.018 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.018 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:25:12.018 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:12.018 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:12.018 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:12.018 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:12.018 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:12.018 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:12.018 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:12.018 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:12.018 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:12.018 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:12.018 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:12.018 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:12.018 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:25:12.018 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:12.018 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:12.018 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:12.018 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:12.018 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:12.018 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.018 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:12.018 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.018 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:12.018 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:12.018 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:12.018 20:55:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:14.549 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:14.549 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
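The trace above shows how nvmf/common.sh resolves the two discovered E810 ports (0000:09:00.0 and 0000:09:00.1, device 0x159b) to kernel interfaces: for each PCI address it globs the device's net/ directory in sysfs and, on TCP, keeps every interface name found there; the cvl_0_0 / cvl_0_1 names echoed just below come out of that glob. A rough standalone equivalent, assuming the same sysfs layout:

# Map each test NIC's PCI address to its kernel netdev name(s), using the
# same /sys/bus/pci/devices/<bdf>/net/* glob the framework relies on.
for pci in 0000:09:00.0 0000:09:00.1; do
  for netdir in /sys/bus/pci/devices/$pci/net/*; do
    [ -e "$netdir" ] || continue
    echo "Found net device under $pci: ${netdir##*/}"
  done
done
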
00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:14.549 Found net devices under 0000:09:00.0: cvl_0_0 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:14.549 Found net devices under 0000:09:00.1: cvl_0_1 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.549 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:14.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:14.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:25:14.550 00:25:14.550 --- 10.0.0.2 ping statistics --- 00:25:14.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.550 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:14.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:14.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:25:14.550 00:25:14.550 --- 10.0.0.1 ping statistics --- 00:25:14.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.550 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:14.550 20:55:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:14.550 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:25:14.550 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:14.550 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:14.550 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:14.550 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:14.550 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1767990 00:25:14.550 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:14.550 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1767990 00:25:14.550 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1767990 ']' 00:25:14.550 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.550 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:14.550 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:14.550 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:14.550 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:14.550 [2024-11-26 20:55:18.062476] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
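At this point nvmf_tcp_init (traced above) has wired the two ports back-to-back: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace as the target side (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1/24), both directions are ping-checked, and nvmf_tgt is then launched inside the namespace with waitforlisten polling its /var/tmp/spdk.sock RPC socket. A condensed sketch of that wiring, using the same interface names and addresses as this run (root required, error handling omitted):

# Target NIC lives in its own namespace; initiator NIC stays in the default one.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Accept NVMe/TCP (port 4420) from the initiator interface, then sanity-check both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
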
00:25:14.550 [2024-11-26 20:55:18.062555] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:14.550 [2024-11-26 20:55:18.136406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:14.550 [2024-11-26 20:55:18.197367] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:14.550 [2024-11-26 20:55:18.197414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:14.550 [2024-11-26 20:55:18.197433] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:14.550 [2024-11-26 20:55:18.197444] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:14.550 [2024-11-26 20:55:18.197454] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:14.550 [2024-11-26 20:55:18.199046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:14.550 [2024-11-26 20:55:18.199097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:14.550 [2024-11-26 20:55:18.199101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:14.809 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:14.809 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:25:14.809 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:14.809 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:14.809 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:14.809 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:14.810 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:14.810 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.810 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:14.810 [2024-11-26 20:55:18.344444] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:14.810 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.810 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:14.810 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.810 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:14.810 Malloc0 00:25:14.810 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.810 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:14.810 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.810 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:14.810 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
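The rpc_cmd calls traced here, together with the add-namespace and add-listener steps that follow below, are thin wrappers around scripts/rpc.py talking to the target's /var/tmp/spdk.sock. Outside the test framework, the same target state could be reproduced with a sketch like this (sizes, NQN and listen address taken from this run; flag semantics per rpc.py --help):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# TCP transport with the same options this run passes through NVMF_TRANSPORT_OPTS.
"$rpc" nvmf_create_transport -t tcp -o -u 8192
# 64 MB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE above).
"$rpc" bdev_malloc_create 64 512 -b Malloc0
# Subsystem with Malloc0 as its namespace, listening on the namespaced target IP.
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
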
00:25:14.810 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:14.810 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.810 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:14.810 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.810 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:14.810 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.810 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:14.810 [2024-11-26 20:55:18.411420] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:14.810 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.810 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:25:14.810 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:25:14.810 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:25:14.810 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:25:14.810 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:14.810 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:14.810 { 00:25:14.810 "params": { 00:25:14.810 "name": "Nvme$subsystem", 00:25:14.810 "trtype": "$TEST_TRANSPORT", 00:25:14.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:14.810 "adrfam": "ipv4", 00:25:14.810 "trsvcid": "$NVMF_PORT", 00:25:14.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:14.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:14.810 "hdgst": ${hdgst:-false}, 00:25:14.810 "ddgst": ${ddgst:-false} 00:25:14.810 }, 00:25:14.810 "method": "bdev_nvme_attach_controller" 00:25:14.810 } 00:25:14.810 EOF 00:25:14.810 )") 00:25:14.810 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:25:14.810 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:25:14.810 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:25:14.810 20:55:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:14.810 "params": { 00:25:14.810 "name": "Nvme1", 00:25:14.810 "trtype": "tcp", 00:25:14.810 "traddr": "10.0.0.2", 00:25:14.810 "adrfam": "ipv4", 00:25:14.810 "trsvcid": "4420", 00:25:14.810 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:14.810 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:14.810 "hdgst": false, 00:25:14.810 "ddgst": false 00:25:14.810 }, 00:25:14.810 "method": "bdev_nvme_attach_controller" 00:25:14.810 }' 00:25:14.810 [2024-11-26 20:55:18.459614] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:25:14.810 [2024-11-26 20:55:18.459709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1768141 ] 00:25:15.068 [2024-11-26 20:55:18.529187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.068 [2024-11-26 20:55:18.588783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.326 Running I/O for 1 seconds... 00:25:16.260 8556.00 IOPS, 33.42 MiB/s 00:25:16.260 Latency(us) 00:25:16.260 [2024-11-26T19:55:19.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.260 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:16.260 Verification LBA range: start 0x0 length 0x4000 00:25:16.260 Nvme1n1 : 1.01 8596.32 33.58 0.00 0.00 14804.17 2973.39 13592.65 00:25:16.260 [2024-11-26T19:55:19.957Z] =================================================================================================================== 00:25:16.260 [2024-11-26T19:55:19.957Z] Total : 8596.32 33.58 0.00 0.00 14804.17 2973.39 13592.65 00:25:16.519 20:55:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1768275 00:25:16.519 20:55:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:25:16.519 20:55:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:25:16.519 20:55:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:25:16.519 20:55:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:25:16.519 20:55:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:25:16.519 20:55:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:16.519 20:55:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:16.519 { 00:25:16.519 "params": { 00:25:16.519 "name": "Nvme$subsystem", 00:25:16.519 "trtype": "$TEST_TRANSPORT", 00:25:16.519 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:16.519 "adrfam": "ipv4", 00:25:16.519 "trsvcid": "$NVMF_PORT", 00:25:16.519 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:16.519 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:16.519 "hdgst": ${hdgst:-false}, 00:25:16.519 "ddgst": ${ddgst:-false} 00:25:16.519 }, 00:25:16.519 "method": "bdev_nvme_attach_controller" 00:25:16.519 } 00:25:16.519 EOF 00:25:16.519 )") 00:25:16.519 20:55:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:25:16.519 20:55:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
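The one-second verify run above settles at 8596.32 IOPS, and since the jobs use the 4096-byte I/O size requested with -o 4096, the MiB/s column is simply that rate times the I/O size: 8596.32 * 4096 B is about 33.58 MiB/s, matching the table. A quick check of the arithmetic:

  python3 -c 'print(8596.32 * 4096 / 2**20)'   # prints ~33.58, the MiB/s figure reported above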
00:25:16.519 20:55:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:25:16.519 20:55:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:16.519 "params": { 00:25:16.519 "name": "Nvme1", 00:25:16.519 "trtype": "tcp", 00:25:16.519 "traddr": "10.0.0.2", 00:25:16.519 "adrfam": "ipv4", 00:25:16.519 "trsvcid": "4420", 00:25:16.519 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:16.519 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:16.519 "hdgst": false, 00:25:16.519 "ddgst": false 00:25:16.519 }, 00:25:16.519 "method": "bdev_nvme_attach_controller" 00:25:16.519 }' 00:25:16.519 [2024-11-26 20:55:20.113877] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:25:16.519 [2024-11-26 20:55:20.113970] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1768275 ] 00:25:16.519 [2024-11-26 20:55:20.185190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.776 [2024-11-26 20:55:20.249582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.034 Running I/O for 15 seconds... 00:25:19.339 8509.00 IOPS, 33.24 MiB/s [2024-11-26T19:55:23.298Z] 8551.00 IOPS, 33.40 MiB/s [2024-11-26T19:55:23.298Z] 20:55:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1767990 00:25:19.601 20:55:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:25:19.601 [2024-11-26 20:55:23.079442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:38232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.601 [2024-11-26 20:55:23.079491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.601 [2024-11-26 20:55:23.079521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:38240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.601 [2024-11-26 20:55:23.079538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.601 [2024-11-26 20:55:23.079556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.601 [2024-11-26 20:55:23.079573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.601 [2024-11-26 20:55:23.079590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:38256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.601 [2024-11-26 20:55:23.079613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.601 [2024-11-26 20:55:23.079631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:38264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.601 [2024-11-26 20:55:23.079647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.601 [2024-11-26 20:55:23.079680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:38272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.601 [2024-11-26 
20:55:23.079696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.601 [2024-11-26 20:55:23.079710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:38280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.601 [2024-11-26 20:55:23.079725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.601 [2024-11-26 20:55:23.079740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.601 [2024-11-26 20:55:23.079754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.601 [2024-11-26 20:55:23.079771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:38296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.601 [2024-11-26 20:55:23.079785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.601 [2024-11-26 20:55:23.079815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:38304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.601 [2024-11-26 20:55:23.079838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.601 [2024-11-26 20:55:23.079855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:38312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.601 [2024-11-26 20:55:23.079869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.601 [2024-11-26 20:55:23.079884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:38320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.601 [2024-11-26 20:55:23.079900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.601 [2024-11-26 20:55:23.079929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:38328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.601 [2024-11-26 20:55:23.079942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.601 [2024-11-26 20:55:23.079956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:38336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.601 [2024-11-26 20:55:23.079969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.601 [2024-11-26 20:55:23.079984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:38344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.601 [2024-11-26 20:55:23.079997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.601 [2024-11-26 20:55:23.080017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:38352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.601 [2024-11-26 20:55:23.080031] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.601 [2024-11-26 20:55:23.080046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:38360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.601 [2024-11-26 20:55:23.080059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.601 [2024-11-26 20:55:23.080073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.601 [2024-11-26 20:55:23.080086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.601 [2024-11-26 20:55:23.080101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.601 [2024-11-26 20:55:23.080115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.601 [2024-11-26 20:55:23.080130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:38384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.601 [2024-11-26 20:55:23.080142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.601 [2024-11-26 20:55:23.080156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:38392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.601 [2024-11-26 20:55:23.080169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.601 [2024-11-26 20:55:23.080183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:38400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.601 [2024-11-26 20:55:23.080210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.601 [2024-11-26 20:55:23.080228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:38408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.601 [2024-11-26 20:55:23.080241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.601 [2024-11-26 20:55:23.080255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:38416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.601 [2024-11-26 20:55:23.080267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.601 [2024-11-26 20:55:23.080316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:38424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.601 [2024-11-26 20:55:23.080334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.601 [2024-11-26 20:55:23.080350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:38432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.601 [2024-11-26 20:55:23.080364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.601 [2024-11-26 20:55:23.080379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:38440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.601 [2024-11-26 20:55:23.080394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.601 [2024-11-26 20:55:23.080409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.601 [2024-11-26 20:55:23.080423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.601 [2024-11-26 20:55:23.080439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:38456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.601 [2024-11-26 20:55:23.080452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.601 [2024-11-26 20:55:23.080468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:38464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.601 [2024-11-26 20:55:23.080483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.601 [2024-11-26 20:55:23.080498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:38472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.601 [2024-11-26 20:55:23.080512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.601 [2024-11-26 20:55:23.080534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:38480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.601 [2024-11-26 20:55:23.080550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.601 [2024-11-26 20:55:23.080566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.602 [2024-11-26 20:55:23.080580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.080620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:38496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.602 [2024-11-26 20:55:23.080633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.080647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:38504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.602 [2024-11-26 20:55:23.080664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.080679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:38512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.602 [2024-11-26 20:55:23.080691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.080706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:38520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.602 [2024-11-26 20:55:23.080732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.080747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:38528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.602 [2024-11-26 20:55:23.080759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.080773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:38536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.602 [2024-11-26 20:55:23.080786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.080799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:38544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.602 [2024-11-26 20:55:23.080812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.080827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:38552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.602 [2024-11-26 20:55:23.080839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.080852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.602 [2024-11-26 20:55:23.080864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.080878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:38568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.602 [2024-11-26 20:55:23.080891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.080904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:38576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.602 [2024-11-26 20:55:23.080916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.080944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:38584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.602 [2024-11-26 20:55:23.080957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.080972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:38592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.602 [2024-11-26 20:55:23.080984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.080997] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:38600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.602 [2024-11-26 20:55:23.081010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.081028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:38608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.602 [2024-11-26 20:55:23.081042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.081056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:38616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.602 [2024-11-26 20:55:23.081068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.081082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:38624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.602 [2024-11-26 20:55:23.081094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.081109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:38632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.602 [2024-11-26 20:55:23.081122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.081136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:38640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.602 [2024-11-26 20:55:23.081149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.081163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:37704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.602 [2024-11-26 20:55:23.081176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.081190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:37712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.602 [2024-11-26 20:55:23.081203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.081217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.602 [2024-11-26 20:55:23.081229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.081243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:37728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.602 [2024-11-26 20:55:23.081256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.081270] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:37736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.602 [2024-11-26 20:55:23.081297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.081331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:37744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.602 [2024-11-26 20:55:23.081347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.081363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:37752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.602 [2024-11-26 20:55:23.081388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.081403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:37760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.602 [2024-11-26 20:55:23.081418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.081438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:37768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.602 [2024-11-26 20:55:23.081453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.081469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.602 [2024-11-26 20:55:23.081483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.081499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.602 [2024-11-26 20:55:23.081513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.081530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:37792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.602 [2024-11-26 20:55:23.081545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.081561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:37800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.602 [2024-11-26 20:55:23.081575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.081615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.602 [2024-11-26 20:55:23.081629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.081645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:37816 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.602 [2024-11-26 20:55:23.081673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.081688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:37824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.602 [2024-11-26 20:55:23.081701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.081716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:37832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.602 [2024-11-26 20:55:23.081729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.081743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:37840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.602 [2024-11-26 20:55:23.081756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.081770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:37848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.602 [2024-11-26 20:55:23.081783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.602 [2024-11-26 20:55:23.081802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:37856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.602 [2024-11-26 20:55:23.081816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.081830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:37864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.603 [2024-11-26 20:55:23.081846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.081860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:37872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.603 [2024-11-26 20:55:23.081873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.081887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:37880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.603 [2024-11-26 20:55:23.081900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.081914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:37888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.603 [2024-11-26 20:55:23.081926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.081940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:37896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:19.603 [2024-11-26 20:55:23.081953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.081966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.603 [2024-11-26 20:55:23.081979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.081992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:37912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.603 [2024-11-26 20:55:23.082004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.082018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:37920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.603 [2024-11-26 20:55:23.082031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.082044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.603 [2024-11-26 20:55:23.082056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.082070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:37936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.603 [2024-11-26 20:55:23.082082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.082096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:37944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.603 [2024-11-26 20:55:23.082109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.082123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:37952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.603 [2024-11-26 20:55:23.082135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.082150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:37960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.603 [2024-11-26 20:55:23.082162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.082179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.603 [2024-11-26 20:55:23.082192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.082206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:37976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.603 [2024-11-26 
20:55:23.082218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.082237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:38648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.603 [2024-11-26 20:55:23.082250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.082264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:38656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.603 [2024-11-26 20:55:23.082276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.082313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:38664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.603 [2024-11-26 20:55:23.082330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.082349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:38672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.603 [2024-11-26 20:55:23.082364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.082379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:38680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.603 [2024-11-26 20:55:23.082393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.082409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:38688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.603 [2024-11-26 20:55:23.082423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.082439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:38696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.603 [2024-11-26 20:55:23.082453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.082469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:38704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.603 [2024-11-26 20:55:23.082483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.082499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:38712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.603 [2024-11-26 20:55:23.082514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.082529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.603 [2024-11-26 20:55:23.082543] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.082559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:37992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.603 [2024-11-26 20:55:23.082578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.082610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.603 [2024-11-26 20:55:23.082628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.082643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.603 [2024-11-26 20:55:23.082671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.082685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:38016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.603 [2024-11-26 20:55:23.082698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.082712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:38024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.603 [2024-11-26 20:55:23.082724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.082738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:38032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.603 [2024-11-26 20:55:23.082750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.082768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:38040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.603 [2024-11-26 20:55:23.082781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.082795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:38048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.603 [2024-11-26 20:55:23.082808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.082822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.603 [2024-11-26 20:55:23.082835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.082848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:38064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.603 [2024-11-26 20:55:23.082861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.082875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:38072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.603 [2024-11-26 20:55:23.082888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.082902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:38080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.603 [2024-11-26 20:55:23.082914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.082928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:38088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.603 [2024-11-26 20:55:23.082941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.603 [2024-11-26 20:55:23.082961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-11-26 20:55:23.082975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-11-26 20:55:23.082988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:38720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:19.604 [2024-11-26 20:55:23.083001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-11-26 20:55:23.083015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-11-26 20:55:23.083027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-11-26 20:55:23.083041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:38112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-11-26 20:55:23.083054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-11-26 20:55:23.083067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:38120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-11-26 20:55:23.083084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-11-26 20:55:23.083099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:38128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-11-26 20:55:23.083112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-11-26 20:55:23.083125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-11-26 20:55:23.083138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-11-26 20:55:23.083151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:38144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-11-26 20:55:23.083164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-11-26 20:55:23.083178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:38152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-11-26 20:55:23.083190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-11-26 20:55:23.083208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:38160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-11-26 20:55:23.083221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-11-26 20:55:23.083235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-11-26 20:55:23.083248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-11-26 20:55:23.083262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:38176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-11-26 20:55:23.083274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-11-26 20:55:23.083312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-11-26 20:55:23.083329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-11-26 20:55:23.083351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:38192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-11-26 20:55:23.083366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-11-26 20:55:23.083382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:38200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-11-26 20:55:23.083396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-11-26 20:55:23.083412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:38208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-11-26 20:55:23.083426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-11-26 20:55:23.083441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:38216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:19.604 [2024-11-26 20:55:23.083456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:19.604 [2024-11-26 20:55:23.083470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9af880 is same with the state(6) to be set 00:25:19.604 [2024-11-26 20:55:23.083487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:19.604 [2024-11-26 20:55:23.083499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:19.604 [2024-11-26 20:55:23.083511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38224 len:8 PRP1 0x0 PRP2 0x0 00:25:19.604 [2024-11-26 20:55:23.083524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-11-26 20:55:23.083685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.604 [2024-11-26 20:55:23.083705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-11-26 20:55:23.083734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.604 [2024-11-26 20:55:23.083748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-11-26 20:55:23.083762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.604 [2024-11-26 20:55:23.083794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-11-26 20:55:23.083817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:19.604 [2024-11-26 20:55:23.083832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:19.604 [2024-11-26 20:55:23.083845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.604 [2024-11-26 20:55:23.087003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:19.604 [2024-11-26 20:55:23.087035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.604 [2024-11-26 20:55:23.087608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.604 [2024-11-26 20:55:23.087638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:19.604 [2024-11-26 20:55:23.087660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.604 [2024-11-26 20:55:23.087902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.604 [2024-11-26 20:55:23.088096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:19.604 [2024-11-26 20:55:23.088115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 
00:25:19.604 [2024-11-26 20:55:23.088130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:19.604 [2024-11-26 20:55:23.088144] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:19.604 [2024-11-26 20:55:23.100420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:19.604 [2024-11-26 20:55:23.100792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.604 [2024-11-26 20:55:23.100821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:19.604 [2024-11-26 20:55:23.100837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.604 [2024-11-26 20:55:23.101048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.604 [2024-11-26 20:55:23.101252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:19.604 [2024-11-26 20:55:23.101272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:19.604 [2024-11-26 20:55:23.101299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:19.604 [2024-11-26 20:55:23.101326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:19.604 [2024-11-26 20:55:23.113577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:19.604 [2024-11-26 20:55:23.113988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.604 [2024-11-26 20:55:23.114017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:19.604 [2024-11-26 20:55:23.114033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.604 [2024-11-26 20:55:23.114265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.604 [2024-11-26 20:55:23.114499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:19.604 [2024-11-26 20:55:23.114522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:19.604 [2024-11-26 20:55:23.114536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:19.604 [2024-11-26 20:55:23.114549] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
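What the long block above records: at host/bdevperf.sh line 33 the test kills the process started earlier as the nvmf target (kill -9 1767990) while the 15-second bdevperf run is still in flight, so the host side drains its queue by completing every outstanding READ and WRITE on qpair 1 manually with ABORTED - SQ DELETION status, and bdev_nvme then enters the reconnect loop seen here. Each attempt to reconnect to 10.0.0.2:4420 fails with errno = 111 because nothing is listening on that port any more; on Linux that errno is ECONNREFUSED, which can be confirmed with:

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # ECONNREFUSED - Connection refused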
00:25:19.604 [2024-11-26 20:55:23.126717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:19.604 [2024-11-26 20:55:23.127067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.604 [2024-11-26 20:55:23.127096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:19.604 [2024-11-26 20:55:23.127112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.604 [2024-11-26 20:55:23.127362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.604 [2024-11-26 20:55:23.127583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:19.604 [2024-11-26 20:55:23.127625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:19.604 [2024-11-26 20:55:23.127640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:19.604 [2024-11-26 20:55:23.127653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:19.605 [2024-11-26 20:55:23.139772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:19.605 [2024-11-26 20:55:23.140131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.605 [2024-11-26 20:55:23.140161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:19.605 [2024-11-26 20:55:23.140177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.605 [2024-11-26 20:55:23.140446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.605 [2024-11-26 20:55:23.140661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:19.605 [2024-11-26 20:55:23.140696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:19.605 [2024-11-26 20:55:23.140709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:19.605 [2024-11-26 20:55:23.140722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:19.605 [2024-11-26 20:55:23.152848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:19.605 [2024-11-26 20:55:23.153252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.605 [2024-11-26 20:55:23.153282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:19.605 [2024-11-26 20:55:23.153299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.605 [2024-11-26 20:55:23.153566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.605 [2024-11-26 20:55:23.153772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:19.605 [2024-11-26 20:55:23.153793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:19.605 [2024-11-26 20:55:23.153806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:19.605 [2024-11-26 20:55:23.153818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:19.605 [2024-11-26 20:55:23.165861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:19.605 [2024-11-26 20:55:23.166206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.605 [2024-11-26 20:55:23.166235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:19.605 [2024-11-26 20:55:23.166251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.605 [2024-11-26 20:55:23.166498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.605 [2024-11-26 20:55:23.166722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:19.605 [2024-11-26 20:55:23.166743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:19.605 [2024-11-26 20:55:23.166758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:19.605 [2024-11-26 20:55:23.166775] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:19.605 [2024-11-26 20:55:23.178971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:19.605 [2024-11-26 20:55:23.179381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.605 [2024-11-26 20:55:23.179411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:19.605 [2024-11-26 20:55:23.179427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.605 [2024-11-26 20:55:23.179671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.605 [2024-11-26 20:55:23.179875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:19.605 [2024-11-26 20:55:23.179895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:19.605 [2024-11-26 20:55:23.179907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:19.605 [2024-11-26 20:55:23.179920] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:19.605 [2024-11-26 20:55:23.191964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:19.605 [2024-11-26 20:55:23.192322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.605 [2024-11-26 20:55:23.192351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:19.605 [2024-11-26 20:55:23.192366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.605 [2024-11-26 20:55:23.192604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.605 [2024-11-26 20:55:23.192808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:19.605 [2024-11-26 20:55:23.192828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:19.605 [2024-11-26 20:55:23.192842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:19.605 [2024-11-26 20:55:23.192854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:19.605 [2024-11-26 20:55:23.204978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:19.605 [2024-11-26 20:55:23.205389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.605 [2024-11-26 20:55:23.205417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:19.605 [2024-11-26 20:55:23.205433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.605 [2024-11-26 20:55:23.205671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.605 [2024-11-26 20:55:23.205874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:19.605 [2024-11-26 20:55:23.205895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:19.605 [2024-11-26 20:55:23.205907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:19.605 [2024-11-26 20:55:23.205921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:19.605 [2024-11-26 20:55:23.218020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:19.605 [2024-11-26 20:55:23.218370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.605 [2024-11-26 20:55:23.218399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:19.605 [2024-11-26 20:55:23.218415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.605 [2024-11-26 20:55:23.218651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.605 [2024-11-26 20:55:23.218855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:19.605 [2024-11-26 20:55:23.218875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:19.605 [2024-11-26 20:55:23.218889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:19.605 [2024-11-26 20:55:23.218901] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:19.605 [2024-11-26 20:55:23.231118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:19.605 [2024-11-26 20:55:23.231481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.605 [2024-11-26 20:55:23.231523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:19.605 [2024-11-26 20:55:23.231538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.605 [2024-11-26 20:55:23.231754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.605 [2024-11-26 20:55:23.231958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:19.605 [2024-11-26 20:55:23.231977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:19.605 [2024-11-26 20:55:23.231990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:19.606 [2024-11-26 20:55:23.232002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:19.606 [2024-11-26 20:55:23.244166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:19.606 [2024-11-26 20:55:23.244501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.606 [2024-11-26 20:55:23.244530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:19.606 [2024-11-26 20:55:23.244546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.606 [2024-11-26 20:55:23.244779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.606 [2024-11-26 20:55:23.244967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:19.606 [2024-11-26 20:55:23.244987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:19.606 [2024-11-26 20:55:23.244999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:19.606 [2024-11-26 20:55:23.245012] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:19.606 [2024-11-26 20:55:23.257848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:19.606 [2024-11-26 20:55:23.258205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.606 [2024-11-26 20:55:23.258233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:19.606 [2024-11-26 20:55:23.258249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.606 [2024-11-26 20:55:23.258495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.606 [2024-11-26 20:55:23.258732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:19.606 [2024-11-26 20:55:23.258753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:19.606 [2024-11-26 20:55:23.258766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:19.606 [2024-11-26 20:55:23.258778] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:19.606 [2024-11-26 20:55:23.271100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:19.606 [2024-11-26 20:55:23.271443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.606 [2024-11-26 20:55:23.271473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:19.606 [2024-11-26 20:55:23.271489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.606 [2024-11-26 20:55:23.271727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.606 [2024-11-26 20:55:23.271930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:19.606 [2024-11-26 20:55:23.271950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:19.606 [2024-11-26 20:55:23.271963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:19.606 [2024-11-26 20:55:23.271975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:19.606 [2024-11-26 20:55:23.284274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:19.606 [2024-11-26 20:55:23.284736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.606 [2024-11-26 20:55:23.284764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:19.606 [2024-11-26 20:55:23.284780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.606 [2024-11-26 20:55:23.284997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.606 [2024-11-26 20:55:23.285201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:19.606 [2024-11-26 20:55:23.285222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:19.606 [2024-11-26 20:55:23.285234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:19.606 [2024-11-26 20:55:23.285247] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:19.865 [2024-11-26 20:55:23.297688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:19.865 [2024-11-26 20:55:23.298043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.865 [2024-11-26 20:55:23.298092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:19.865 [2024-11-26 20:55:23.298108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.865 [2024-11-26 20:55:23.298369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.865 [2024-11-26 20:55:23.298569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:19.865 [2024-11-26 20:55:23.298610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:19.865 [2024-11-26 20:55:23.298624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:19.865 [2024-11-26 20:55:23.298637] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:19.865 [2024-11-26 20:55:23.310907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:19.865 [2024-11-26 20:55:23.311312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.865 [2024-11-26 20:55:23.311356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:19.865 [2024-11-26 20:55:23.311373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.865 [2024-11-26 20:55:23.311617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.865 [2024-11-26 20:55:23.311822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:19.865 [2024-11-26 20:55:23.311842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:19.865 [2024-11-26 20:55:23.311855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:19.865 [2024-11-26 20:55:23.311868] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:19.865 [2024-11-26 20:55:23.323984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:19.865 [2024-11-26 20:55:23.324459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.865 [2024-11-26 20:55:23.324488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:19.865 [2024-11-26 20:55:23.324503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.865 [2024-11-26 20:55:23.324744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.865 [2024-11-26 20:55:23.324952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:19.865 [2024-11-26 20:55:23.324987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:19.865 [2024-11-26 20:55:23.325000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:19.865 [2024-11-26 20:55:23.325012] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:19.865 [2024-11-26 20:55:23.337120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:19.865 [2024-11-26 20:55:23.337451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.865 [2024-11-26 20:55:23.337493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:19.865 [2024-11-26 20:55:23.337510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.865 [2024-11-26 20:55:23.337753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.865 [2024-11-26 20:55:23.337998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:19.865 [2024-11-26 20:55:23.338020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:19.865 [2024-11-26 20:55:23.338034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:19.865 [2024-11-26 20:55:23.338052] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:19.865 [2024-11-26 20:55:23.351040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:19.865 [2024-11-26 20:55:23.351410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.865 [2024-11-26 20:55:23.351440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:19.865 [2024-11-26 20:55:23.351457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.865 [2024-11-26 20:55:23.351700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.865 [2024-11-26 20:55:23.351899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:19.865 [2024-11-26 20:55:23.351921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:19.865 [2024-11-26 20:55:23.351934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:19.865 [2024-11-26 20:55:23.351963] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:19.865 [2024-11-26 20:55:23.364639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:19.865 [2024-11-26 20:55:23.365000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.865 [2024-11-26 20:55:23.365028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:19.865 [2024-11-26 20:55:23.365045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.865 [2024-11-26 20:55:23.365280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.865 [2024-11-26 20:55:23.365503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:19.865 [2024-11-26 20:55:23.365526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:19.866 [2024-11-26 20:55:23.365540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:19.866 [2024-11-26 20:55:23.365553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:19.866 [2024-11-26 20:55:23.377881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:19.866 [2024-11-26 20:55:23.378222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.866 [2024-11-26 20:55:23.378249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:19.866 [2024-11-26 20:55:23.378265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.866 [2024-11-26 20:55:23.378547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.866 [2024-11-26 20:55:23.378769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:19.866 [2024-11-26 20:55:23.378789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:19.866 [2024-11-26 20:55:23.378802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:19.866 [2024-11-26 20:55:23.378814] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:19.866 [2024-11-26 20:55:23.390842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:19.866 [2024-11-26 20:55:23.391247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.866 [2024-11-26 20:55:23.391280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:19.866 [2024-11-26 20:55:23.391297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.866 [2024-11-26 20:55:23.391566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.866 [2024-11-26 20:55:23.391788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:19.866 [2024-11-26 20:55:23.391809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:19.866 [2024-11-26 20:55:23.391821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:19.866 [2024-11-26 20:55:23.391833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:19.866 [2024-11-26 20:55:23.403947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:19.866 [2024-11-26 20:55:23.404353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.866 [2024-11-26 20:55:23.404382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:19.866 [2024-11-26 20:55:23.404397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.866 [2024-11-26 20:55:23.404632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.866 [2024-11-26 20:55:23.404836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:19.866 [2024-11-26 20:55:23.404857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:19.866 [2024-11-26 20:55:23.404870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:19.866 [2024-11-26 20:55:23.404883] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:19.866 [2024-11-26 20:55:23.417064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:19.866 [2024-11-26 20:55:23.417439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.866 [2024-11-26 20:55:23.417468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:19.866 [2024-11-26 20:55:23.417484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.866 [2024-11-26 20:55:23.417700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.866 [2024-11-26 20:55:23.417904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:19.866 [2024-11-26 20:55:23.417924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:19.866 [2024-11-26 20:55:23.417937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:19.866 [2024-11-26 20:55:23.417950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:19.866 [2024-11-26 20:55:23.430072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:19.866 [2024-11-26 20:55:23.430420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.866 [2024-11-26 20:55:23.430449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:19.866 [2024-11-26 20:55:23.430465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.866 [2024-11-26 20:55:23.430705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.866 [2024-11-26 20:55:23.430909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:19.866 [2024-11-26 20:55:23.430929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:19.866 [2024-11-26 20:55:23.430942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:19.866 [2024-11-26 20:55:23.430955] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:19.866 [2024-11-26 20:55:23.443184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:19.866 [2024-11-26 20:55:23.443513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.866 [2024-11-26 20:55:23.443543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:19.866 [2024-11-26 20:55:23.443559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.866 [2024-11-26 20:55:23.443785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.866 [2024-11-26 20:55:23.443987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:19.866 [2024-11-26 20:55:23.444007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:19.866 [2024-11-26 20:55:23.444019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:19.866 [2024-11-26 20:55:23.444030] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:19.866 [2024-11-26 20:55:23.456247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:19.866 [2024-11-26 20:55:23.456576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.866 [2024-11-26 20:55:23.456604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:19.866 [2024-11-26 20:55:23.456620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.866 [2024-11-26 20:55:23.456840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.866 [2024-11-26 20:55:23.457045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:19.866 [2024-11-26 20:55:23.457066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:19.866 [2024-11-26 20:55:23.457078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:19.866 [2024-11-26 20:55:23.457090] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:19.866 [2024-11-26 20:55:23.469439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:19.866 [2024-11-26 20:55:23.469771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.866 [2024-11-26 20:55:23.469799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:19.866 [2024-11-26 20:55:23.469815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.866 [2024-11-26 20:55:23.470033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.866 [2024-11-26 20:55:23.470236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:19.866 [2024-11-26 20:55:23.470261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:19.866 [2024-11-26 20:55:23.470274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:19.866 [2024-11-26 20:55:23.470287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:19.866 [2024-11-26 20:55:23.482466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:19.866 [2024-11-26 20:55:23.482776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.866 [2024-11-26 20:55:23.482804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:19.866 [2024-11-26 20:55:23.482820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.866 [2024-11-26 20:55:23.483038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.866 [2024-11-26 20:55:23.483243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:19.866 [2024-11-26 20:55:23.483264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:19.866 [2024-11-26 20:55:23.483279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:19.866 [2024-11-26 20:55:23.483291] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:19.866 [2024-11-26 20:55:23.495622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:19.866 [2024-11-26 20:55:23.495965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.866 [2024-11-26 20:55:23.495993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:19.866 [2024-11-26 20:55:23.496009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.867 [2024-11-26 20:55:23.496245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.867 [2024-11-26 20:55:23.496479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:19.867 [2024-11-26 20:55:23.496501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:19.867 [2024-11-26 20:55:23.496515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:19.867 [2024-11-26 20:55:23.496528] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:19.867 [2024-11-26 20:55:23.508808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:19.867 [2024-11-26 20:55:23.509209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.867 [2024-11-26 20:55:23.509238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:19.867 [2024-11-26 20:55:23.509254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.867 [2024-11-26 20:55:23.509510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.867 [2024-11-26 20:55:23.509752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:19.867 [2024-11-26 20:55:23.509773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:19.867 [2024-11-26 20:55:23.509786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:19.867 [2024-11-26 20:55:23.509798] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:19.867 [2024-11-26 20:55:23.521911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:19.867 [2024-11-26 20:55:23.522257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.867 [2024-11-26 20:55:23.522286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:19.867 [2024-11-26 20:55:23.522313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.867 [2024-11-26 20:55:23.522574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.867 [2024-11-26 20:55:23.522797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:19.867 [2024-11-26 20:55:23.522818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:19.867 [2024-11-26 20:55:23.522831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:19.867 [2024-11-26 20:55:23.522843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:19.867 [2024-11-26 20:55:23.535121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:19.867 [2024-11-26 20:55:23.535477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.867 [2024-11-26 20:55:23.535506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:19.867 [2024-11-26 20:55:23.535523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.867 [2024-11-26 20:55:23.535756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.867 [2024-11-26 20:55:23.535962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:19.867 [2024-11-26 20:55:23.535983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:19.867 [2024-11-26 20:55:23.535996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:19.867 [2024-11-26 20:55:23.536008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:19.867 [2024-11-26 20:55:23.548260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:19.867 [2024-11-26 20:55:23.548652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.867 [2024-11-26 20:55:23.548681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:19.867 [2024-11-26 20:55:23.548696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:19.867 [2024-11-26 20:55:23.548931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:19.867 [2024-11-26 20:55:23.549135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:19.867 [2024-11-26 20:55:23.549156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:19.867 [2024-11-26 20:55:23.549168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:19.867 [2024-11-26 20:55:23.549180] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.127 [2024-11-26 20:55:23.561738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.127 [2024-11-26 20:55:23.562081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.127 [2024-11-26 20:55:23.562116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.127 [2024-11-26 20:55:23.562133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.127 [2024-11-26 20:55:23.562377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.127 [2024-11-26 20:55:23.562593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.127 [2024-11-26 20:55:23.562615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.127 [2024-11-26 20:55:23.562628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.127 [2024-11-26 20:55:23.562641] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.127 [2024-11-26 20:55:23.574745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.127 [2024-11-26 20:55:23.575089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.127 [2024-11-26 20:55:23.575118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.127 [2024-11-26 20:55:23.575133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.127 [2024-11-26 20:55:23.575390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.127 [2024-11-26 20:55:23.575621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.127 [2024-11-26 20:55:23.575642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.127 [2024-11-26 20:55:23.575655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.127 [2024-11-26 20:55:23.575683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.127 [2024-11-26 20:55:23.587760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.127 [2024-11-26 20:55:23.588210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.127 [2024-11-26 20:55:23.588264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.127 [2024-11-26 20:55:23.588281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.127 [2024-11-26 20:55:23.588521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.127 [2024-11-26 20:55:23.588771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.127 [2024-11-26 20:55:23.588792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.127 [2024-11-26 20:55:23.588805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.127 [2024-11-26 20:55:23.588819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.127 [2024-11-26 20:55:23.600994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.127 [2024-11-26 20:55:23.601400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.127 [2024-11-26 20:55:23.601429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.127 [2024-11-26 20:55:23.601445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.127 [2024-11-26 20:55:23.601693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.127 [2024-11-26 20:55:23.601881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.127 [2024-11-26 20:55:23.601902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.127 [2024-11-26 20:55:23.601915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.127 [2024-11-26 20:55:23.601928] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.127 [2024-11-26 20:55:23.614207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.127 [2024-11-26 20:55:23.614590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.127 [2024-11-26 20:55:23.614635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.127 [2024-11-26 20:55:23.614652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.127 [2024-11-26 20:55:23.614889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.127 [2024-11-26 20:55:23.615094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.127 [2024-11-26 20:55:23.615114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.127 [2024-11-26 20:55:23.615128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.127 [2024-11-26 20:55:23.615140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.127 7032.33 IOPS, 27.47 MiB/s [2024-11-26T19:55:23.824Z] [2024-11-26 20:55:23.627432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.127 [2024-11-26 20:55:23.627841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.127 [2024-11-26 20:55:23.627870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.127 [2024-11-26 20:55:23.627886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.127 [2024-11-26 20:55:23.628121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.127 [2024-11-26 20:55:23.628353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.127 [2024-11-26 20:55:23.628375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.127 [2024-11-26 20:55:23.628403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.127 [2024-11-26 20:55:23.628417] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.127 [2024-11-26 20:55:23.640560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.127 [2024-11-26 20:55:23.640902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.127 [2024-11-26 20:55:23.640930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.127 [2024-11-26 20:55:23.640946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.127 [2024-11-26 20:55:23.641175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.127 [2024-11-26 20:55:23.641408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.127 [2024-11-26 20:55:23.641434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.127 [2024-11-26 20:55:23.641449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.127 [2024-11-26 20:55:23.641461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.127 [2024-11-26 20:55:23.653598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.127 [2024-11-26 20:55:23.654006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.127 [2024-11-26 20:55:23.654033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.127 [2024-11-26 20:55:23.654049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.127 [2024-11-26 20:55:23.654281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.127 [2024-11-26 20:55:23.654506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.127 [2024-11-26 20:55:23.654529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.127 [2024-11-26 20:55:23.654542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.127 [2024-11-26 20:55:23.654555] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.127 [2024-11-26 20:55:23.666670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.127 [2024-11-26 20:55:23.667038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.127 [2024-11-26 20:55:23.667065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.127 [2024-11-26 20:55:23.667080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.127 [2024-11-26 20:55:23.667290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.127 [2024-11-26 20:55:23.667514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.127 [2024-11-26 20:55:23.667534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.127 [2024-11-26 20:55:23.667548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.127 [2024-11-26 20:55:23.667561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.127 [2024-11-26 20:55:23.679772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.128 [2024-11-26 20:55:23.680113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.128 [2024-11-26 20:55:23.680141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.128 [2024-11-26 20:55:23.680157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.128 [2024-11-26 20:55:23.680384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.128 [2024-11-26 20:55:23.680584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.128 [2024-11-26 20:55:23.680630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.128 [2024-11-26 20:55:23.680644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.128 [2024-11-26 20:55:23.680672] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.128 [2024-11-26 20:55:23.692961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.128 [2024-11-26 20:55:23.693369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.128 [2024-11-26 20:55:23.693398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.128 [2024-11-26 20:55:23.693414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.128 [2024-11-26 20:55:23.693649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.128 [2024-11-26 20:55:23.693853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.128 [2024-11-26 20:55:23.693874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.128 [2024-11-26 20:55:23.693887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.128 [2024-11-26 20:55:23.693899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.128 [2024-11-26 20:55:23.706033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.128 [2024-11-26 20:55:23.706404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.128 [2024-11-26 20:55:23.706432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.128 [2024-11-26 20:55:23.706448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.128 [2024-11-26 20:55:23.706663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.128 [2024-11-26 20:55:23.706867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.128 [2024-11-26 20:55:23.706888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.128 [2024-11-26 20:55:23.706900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.128 [2024-11-26 20:55:23.706912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.128 [2024-11-26 20:55:23.719046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.128 [2024-11-26 20:55:23.719396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.128 [2024-11-26 20:55:23.719425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.128 [2024-11-26 20:55:23.719442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.128 [2024-11-26 20:55:23.719681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.128 [2024-11-26 20:55:23.719884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.128 [2024-11-26 20:55:23.719906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.128 [2024-11-26 20:55:23.719918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.128 [2024-11-26 20:55:23.719930] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.128 [2024-11-26 20:55:23.732098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.128 [2024-11-26 20:55:23.732443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.128 [2024-11-26 20:55:23.732477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.128 [2024-11-26 20:55:23.732494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.128 [2024-11-26 20:55:23.732728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.128 [2024-11-26 20:55:23.732932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.128 [2024-11-26 20:55:23.732952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.128 [2024-11-26 20:55:23.732965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.128 [2024-11-26 20:55:23.732977] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.128 [2024-11-26 20:55:23.745117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.128 [2024-11-26 20:55:23.745434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.128 [2024-11-26 20:55:23.745462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.128 [2024-11-26 20:55:23.745478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.128 [2024-11-26 20:55:23.745695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.128 [2024-11-26 20:55:23.745900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.128 [2024-11-26 20:55:23.745921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.128 [2024-11-26 20:55:23.745933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.128 [2024-11-26 20:55:23.745946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.128 [2024-11-26 20:55:23.758141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.128 [2024-11-26 20:55:23.758551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.128 [2024-11-26 20:55:23.758579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.128 [2024-11-26 20:55:23.758595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.128 [2024-11-26 20:55:23.758833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.128 [2024-11-26 20:55:23.759036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.128 [2024-11-26 20:55:23.759058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.128 [2024-11-26 20:55:23.759071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.128 [2024-11-26 20:55:23.759083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.128 [2024-11-26 20:55:23.771234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.128 [2024-11-26 20:55:23.771655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.128 [2024-11-26 20:55:23.771684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.128 [2024-11-26 20:55:23.771699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.128 [2024-11-26 20:55:23.771939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.128 [2024-11-26 20:55:23.772143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.128 [2024-11-26 20:55:23.772164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.128 [2024-11-26 20:55:23.772176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.128 [2024-11-26 20:55:23.772188] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.128 [2024-11-26 20:55:23.784335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.128 [2024-11-26 20:55:23.784737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.128 [2024-11-26 20:55:23.784765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.128 [2024-11-26 20:55:23.784781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.128 [2024-11-26 20:55:23.785016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.128 [2024-11-26 20:55:23.785220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.128 [2024-11-26 20:55:23.785240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.128 [2024-11-26 20:55:23.785253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.128 [2024-11-26 20:55:23.785266] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.128 [2024-11-26 20:55:23.797506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.128 [2024-11-26 20:55:23.797949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.128 [2024-11-26 20:55:23.797977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.128 [2024-11-26 20:55:23.797993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.128 [2024-11-26 20:55:23.798229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.128 [2024-11-26 20:55:23.798466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.128 [2024-11-26 20:55:23.798489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.128 [2024-11-26 20:55:23.798502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.128 [2024-11-26 20:55:23.798515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.129 [2024-11-26 20:55:23.810463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.129 [2024-11-26 20:55:23.810806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.129 [2024-11-26 20:55:23.810834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.129 [2024-11-26 20:55:23.810850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.129 [2024-11-26 20:55:23.811086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.129 [2024-11-26 20:55:23.811291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.129 [2024-11-26 20:55:23.811337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.129 [2024-11-26 20:55:23.811357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.129 [2024-11-26 20:55:23.811372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.388 [2024-11-26 20:55:23.823699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.388 [2024-11-26 20:55:23.824009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.388 [2024-11-26 20:55:23.824038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.388 [2024-11-26 20:55:23.824054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.388 [2024-11-26 20:55:23.824272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.388 [2024-11-26 20:55:23.824505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.388 [2024-11-26 20:55:23.824527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.388 [2024-11-26 20:55:23.824541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.388 [2024-11-26 20:55:23.824553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.388 [2024-11-26 20:55:23.836740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.388 [2024-11-26 20:55:23.837082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.388 [2024-11-26 20:55:23.837111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.388 [2024-11-26 20:55:23.837127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.388 [2024-11-26 20:55:23.837374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.388 [2024-11-26 20:55:23.837575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.388 [2024-11-26 20:55:23.837611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.388 [2024-11-26 20:55:23.837623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.388 [2024-11-26 20:55:23.837636] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.388 [2024-11-26 20:55:23.849775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.388 [2024-11-26 20:55:23.850174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.388 [2024-11-26 20:55:23.850202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.388 [2024-11-26 20:55:23.850219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.388 [2024-11-26 20:55:23.850475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.388 [2024-11-26 20:55:23.850716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.388 [2024-11-26 20:55:23.850738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.388 [2024-11-26 20:55:23.850752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.388 [2024-11-26 20:55:23.850765] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.388 [2024-11-26 20:55:23.863157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.388 [2024-11-26 20:55:23.863546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.388 [2024-11-26 20:55:23.863575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.388 [2024-11-26 20:55:23.863592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.388 [2024-11-26 20:55:23.863842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.388 [2024-11-26 20:55:23.864045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.388 [2024-11-26 20:55:23.864065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.388 [2024-11-26 20:55:23.864079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.388 [2024-11-26 20:55:23.864091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.388 [2024-11-26 20:55:23.876633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.388 [2024-11-26 20:55:23.877055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.388 [2024-11-26 20:55:23.877083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.388 [2024-11-26 20:55:23.877098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.388 [2024-11-26 20:55:23.877330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.388 [2024-11-26 20:55:23.877552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.388 [2024-11-26 20:55:23.877573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.388 [2024-11-26 20:55:23.877602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.388 [2024-11-26 20:55:23.877614] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.388 [2024-11-26 20:55:23.889890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.388 [2024-11-26 20:55:23.890231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.388 [2024-11-26 20:55:23.890258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.388 [2024-11-26 20:55:23.890273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.388 [2024-11-26 20:55:23.890532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.388 [2024-11-26 20:55:23.890755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.388 [2024-11-26 20:55:23.890774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.388 [2024-11-26 20:55:23.890786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.388 [2024-11-26 20:55:23.890799] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.388 [2024-11-26 20:55:23.903049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.388 [2024-11-26 20:55:23.903410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.388 [2024-11-26 20:55:23.903438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.388 [2024-11-26 20:55:23.903459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.388 [2024-11-26 20:55:23.903662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.388 [2024-11-26 20:55:23.903866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.388 [2024-11-26 20:55:23.903886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.388 [2024-11-26 20:55:23.903899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.389 [2024-11-26 20:55:23.903911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.389 [2024-11-26 20:55:23.916300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.389 [2024-11-26 20:55:23.916727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.389 [2024-11-26 20:55:23.916778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.389 [2024-11-26 20:55:23.916794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.389 [2024-11-26 20:55:23.917034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.389 [2024-11-26 20:55:23.917223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.389 [2024-11-26 20:55:23.917243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.389 [2024-11-26 20:55:23.917256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.389 [2024-11-26 20:55:23.917268] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.389 [2024-11-26 20:55:23.929585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.389 [2024-11-26 20:55:23.929958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.389 [2024-11-26 20:55:23.929985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.389 [2024-11-26 20:55:23.930001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.389 [2024-11-26 20:55:23.930219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.389 [2024-11-26 20:55:23.930455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.389 [2024-11-26 20:55:23.930476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.389 [2024-11-26 20:55:23.930489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.389 [2024-11-26 20:55:23.930502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.389 [2024-11-26 20:55:23.942815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.389 [2024-11-26 20:55:23.943268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.389 [2024-11-26 20:55:23.943334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.389 [2024-11-26 20:55:23.943350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.389 [2024-11-26 20:55:23.943594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.389 [2024-11-26 20:55:23.943805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.389 [2024-11-26 20:55:23.943826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.389 [2024-11-26 20:55:23.943838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.389 [2024-11-26 20:55:23.943851] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.389 [2024-11-26 20:55:23.955966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.389 [2024-11-26 20:55:23.956436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.389 [2024-11-26 20:55:23.956466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.389 [2024-11-26 20:55:23.956482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.389 [2024-11-26 20:55:23.956734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.389 [2024-11-26 20:55:23.956939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.389 [2024-11-26 20:55:23.956959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.389 [2024-11-26 20:55:23.956971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.389 [2024-11-26 20:55:23.956983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.389 [2024-11-26 20:55:23.969191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.389 [2024-11-26 20:55:23.969624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.389 [2024-11-26 20:55:23.969667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.389 [2024-11-26 20:55:23.969683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.389 [2024-11-26 20:55:23.969914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.389 [2024-11-26 20:55:23.970119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.389 [2024-11-26 20:55:23.970138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.389 [2024-11-26 20:55:23.970150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.389 [2024-11-26 20:55:23.970163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.389 [2024-11-26 20:55:23.982254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.389 [2024-11-26 20:55:23.982744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.389 [2024-11-26 20:55:23.982796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.389 [2024-11-26 20:55:23.982812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.389 [2024-11-26 20:55:23.983055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.389 [2024-11-26 20:55:23.983244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.389 [2024-11-26 20:55:23.983263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.389 [2024-11-26 20:55:23.983281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.389 [2024-11-26 20:55:23.983293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.389 [2024-11-26 20:55:23.995449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.389 [2024-11-26 20:55:23.995840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.389 [2024-11-26 20:55:23.995912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.389 [2024-11-26 20:55:23.995927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.389 [2024-11-26 20:55:23.996149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.389 [2024-11-26 20:55:23.996390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.389 [2024-11-26 20:55:23.996412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.389 [2024-11-26 20:55:23.996426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.389 [2024-11-26 20:55:23.996439] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.389 [2024-11-26 20:55:24.008713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.389 [2024-11-26 20:55:24.009104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.389 [2024-11-26 20:55:24.009158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.389 [2024-11-26 20:55:24.009175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.389 [2024-11-26 20:55:24.009431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.389 [2024-11-26 20:55:24.009628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.389 [2024-11-26 20:55:24.009648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.389 [2024-11-26 20:55:24.009661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.389 [2024-11-26 20:55:24.009674] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.389 [2024-11-26 20:55:24.021952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.389 [2024-11-26 20:55:24.022297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.389 [2024-11-26 20:55:24.022360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.389 [2024-11-26 20:55:24.022376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.389 [2024-11-26 20:55:24.022592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.389 [2024-11-26 20:55:24.022822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.389 [2024-11-26 20:55:24.022843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.389 [2024-11-26 20:55:24.022856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.389 [2024-11-26 20:55:24.022868] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.389 [2024-11-26 20:55:24.035159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.389 [2024-11-26 20:55:24.035498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.389 [2024-11-26 20:55:24.035528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.389 [2024-11-26 20:55:24.035544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.389 [2024-11-26 20:55:24.035814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.389 [2024-11-26 20:55:24.036004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.389 [2024-11-26 20:55:24.036025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.389 [2024-11-26 20:55:24.036038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.390 [2024-11-26 20:55:24.036050] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.390 [2024-11-26 20:55:24.048493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.390 [2024-11-26 20:55:24.048858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.390 [2024-11-26 20:55:24.048927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.390 [2024-11-26 20:55:24.048944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.390 [2024-11-26 20:55:24.049175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.390 [2024-11-26 20:55:24.049421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.390 [2024-11-26 20:55:24.049445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.390 [2024-11-26 20:55:24.049461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.390 [2024-11-26 20:55:24.049475] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.390 [2024-11-26 20:55:24.061618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.390 [2024-11-26 20:55:24.061963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.390 [2024-11-26 20:55:24.061994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.390 [2024-11-26 20:55:24.062011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.390 [2024-11-26 20:55:24.062245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.390 [2024-11-26 20:55:24.062483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.390 [2024-11-26 20:55:24.062506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.390 [2024-11-26 20:55:24.062519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.390 [2024-11-26 20:55:24.062532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.390 [2024-11-26 20:55:24.074772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.390 [2024-11-26 20:55:24.075090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.390 [2024-11-26 20:55:24.075119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.390 [2024-11-26 20:55:24.075140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.390 [2024-11-26 20:55:24.075366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.390 [2024-11-26 20:55:24.075576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.390 [2024-11-26 20:55:24.075596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.390 [2024-11-26 20:55:24.075610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.390 [2024-11-26 20:55:24.075637] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.652 [2024-11-26 20:55:24.088447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.652 [2024-11-26 20:55:24.088826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.652 [2024-11-26 20:55:24.088886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.652 [2024-11-26 20:55:24.088901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.652 [2024-11-26 20:55:24.089126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.652 [2024-11-26 20:55:24.089358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.652 [2024-11-26 20:55:24.089381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.652 [2024-11-26 20:55:24.089394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.652 [2024-11-26 20:55:24.089408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.652 [2024-11-26 20:55:24.101931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.652 [2024-11-26 20:55:24.102284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.652 [2024-11-26 20:55:24.102325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.652 [2024-11-26 20:55:24.102343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.652 [2024-11-26 20:55:24.102558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.652 [2024-11-26 20:55:24.102807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.652 [2024-11-26 20:55:24.102828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.652 [2024-11-26 20:55:24.102842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.653 [2024-11-26 20:55:24.102854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.653 [2024-11-26 20:55:24.115514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.653 [2024-11-26 20:55:24.115906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.653 [2024-11-26 20:55:24.115935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.653 [2024-11-26 20:55:24.115951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.653 [2024-11-26 20:55:24.116188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.653 [2024-11-26 20:55:24.116431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.653 [2024-11-26 20:55:24.116455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.653 [2024-11-26 20:55:24.116469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.653 [2024-11-26 20:55:24.116482] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.653 [2024-11-26 20:55:24.128998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.653 [2024-11-26 20:55:24.129373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.653 [2024-11-26 20:55:24.129402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.653 [2024-11-26 20:55:24.129419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.653 [2024-11-26 20:55:24.129633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.653 [2024-11-26 20:55:24.129848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.653 [2024-11-26 20:55:24.129869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.653 [2024-11-26 20:55:24.129882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.653 [2024-11-26 20:55:24.129896] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.653 [2024-11-26 20:55:24.142368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.653 [2024-11-26 20:55:24.142754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.653 [2024-11-26 20:55:24.142782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.653 [2024-11-26 20:55:24.142798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.653 [2024-11-26 20:55:24.143035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.653 [2024-11-26 20:55:24.143229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.653 [2024-11-26 20:55:24.143250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.653 [2024-11-26 20:55:24.143263] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.653 [2024-11-26 20:55:24.143275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.653 [2024-11-26 20:55:24.155653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.653 [2024-11-26 20:55:24.156000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.653 [2024-11-26 20:55:24.156029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.653 [2024-11-26 20:55:24.156045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.653 [2024-11-26 20:55:24.156279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.653 [2024-11-26 20:55:24.156509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.653 [2024-11-26 20:55:24.156531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.653 [2024-11-26 20:55:24.156560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.653 [2024-11-26 20:55:24.156574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.653 [2024-11-26 20:55:24.168871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.653 [2024-11-26 20:55:24.169217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.653 [2024-11-26 20:55:24.169245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.653 [2024-11-26 20:55:24.169260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.653 [2024-11-26 20:55:24.169515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.653 [2024-11-26 20:55:24.169765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.653 [2024-11-26 20:55:24.169786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.653 [2024-11-26 20:55:24.169799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.653 [2024-11-26 20:55:24.169811] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.653 [2024-11-26 20:55:24.182104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.653 [2024-11-26 20:55:24.182464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.653 [2024-11-26 20:55:24.182493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.653 [2024-11-26 20:55:24.182509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.653 [2024-11-26 20:55:24.182744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.653 [2024-11-26 20:55:24.182954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.653 [2024-11-26 20:55:24.182974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.653 [2024-11-26 20:55:24.182987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.653 [2024-11-26 20:55:24.183000] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.653 [2024-11-26 20:55:24.195397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.653 [2024-11-26 20:55:24.195786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.653 [2024-11-26 20:55:24.195815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.653 [2024-11-26 20:55:24.195831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.653 [2024-11-26 20:55:24.196066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.653 [2024-11-26 20:55:24.196258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.653 [2024-11-26 20:55:24.196278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.653 [2024-11-26 20:55:24.196328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.653 [2024-11-26 20:55:24.196358] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.653 [2024-11-26 20:55:24.208697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.653 [2024-11-26 20:55:24.209018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.653 [2024-11-26 20:55:24.209046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.653 [2024-11-26 20:55:24.209062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.653 [2024-11-26 20:55:24.209278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.653 [2024-11-26 20:55:24.209515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.653 [2024-11-26 20:55:24.209536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.653 [2024-11-26 20:55:24.209549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.653 [2024-11-26 20:55:24.209573] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.653 [2024-11-26 20:55:24.222024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.653 [2024-11-26 20:55:24.222380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.653 [2024-11-26 20:55:24.222409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.653 [2024-11-26 20:55:24.222426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.653 [2024-11-26 20:55:24.222654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.653 [2024-11-26 20:55:24.222865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.653 [2024-11-26 20:55:24.222887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.653 [2024-11-26 20:55:24.222900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.653 [2024-11-26 20:55:24.222912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.653 [2024-11-26 20:55:24.235203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.653 [2024-11-26 20:55:24.235588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.653 [2024-11-26 20:55:24.235632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.653 [2024-11-26 20:55:24.235648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.653 [2024-11-26 20:55:24.235882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.653 [2024-11-26 20:55:24.236076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.653 [2024-11-26 20:55:24.236097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.653 [2024-11-26 20:55:24.236111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.654 [2024-11-26 20:55:24.236124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.654 [2024-11-26 20:55:24.248440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.654 [2024-11-26 20:55:24.248827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.654 [2024-11-26 20:55:24.248856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.654 [2024-11-26 20:55:24.248878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.654 [2024-11-26 20:55:24.249130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.654 [2024-11-26 20:55:24.249351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.654 [2024-11-26 20:55:24.249374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.654 [2024-11-26 20:55:24.249389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.654 [2024-11-26 20:55:24.249402] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.654 [2024-11-26 20:55:24.261793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.654 [2024-11-26 20:55:24.262145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.654 [2024-11-26 20:55:24.262174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.654 [2024-11-26 20:55:24.262190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.654 [2024-11-26 20:55:24.262443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.654 [2024-11-26 20:55:24.262685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.654 [2024-11-26 20:55:24.262705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.654 [2024-11-26 20:55:24.262718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.654 [2024-11-26 20:55:24.262730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.654 [2024-11-26 20:55:24.275188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.654 [2024-11-26 20:55:24.275546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.654 [2024-11-26 20:55:24.275576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.654 [2024-11-26 20:55:24.275608] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.654 [2024-11-26 20:55:24.275829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.654 [2024-11-26 20:55:24.276055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.654 [2024-11-26 20:55:24.276077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.654 [2024-11-26 20:55:24.276090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.654 [2024-11-26 20:55:24.276103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.654 [2024-11-26 20:55:24.288597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.654 [2024-11-26 20:55:24.288973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.654 [2024-11-26 20:55:24.289001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.654 [2024-11-26 20:55:24.289017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.654 [2024-11-26 20:55:24.289253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.654 [2024-11-26 20:55:24.289485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.654 [2024-11-26 20:55:24.289508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.654 [2024-11-26 20:55:24.289521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.654 [2024-11-26 20:55:24.289534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.654 [2024-11-26 20:55:24.301799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.654 [2024-11-26 20:55:24.302148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.654 [2024-11-26 20:55:24.302177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.654 [2024-11-26 20:55:24.302193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.654 [2024-11-26 20:55:24.302447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.654 [2024-11-26 20:55:24.302680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.654 [2024-11-26 20:55:24.302702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.654 [2024-11-26 20:55:24.302715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.654 [2024-11-26 20:55:24.302727] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.654 [2024-11-26 20:55:24.315077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.654 [2024-11-26 20:55:24.315456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.654 [2024-11-26 20:55:24.315484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.654 [2024-11-26 20:55:24.315500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.654 [2024-11-26 20:55:24.315747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.654 [2024-11-26 20:55:24.315940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.654 [2024-11-26 20:55:24.315961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.654 [2024-11-26 20:55:24.315974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.654 [2024-11-26 20:55:24.315987] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.654 [2024-11-26 20:55:24.328253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.654 [2024-11-26 20:55:24.328587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.654 [2024-11-26 20:55:24.328617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.654 [2024-11-26 20:55:24.328632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.654 [2024-11-26 20:55:24.328849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.654 [2024-11-26 20:55:24.329059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.654 [2024-11-26 20:55:24.329080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.654 [2024-11-26 20:55:24.329098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.654 [2024-11-26 20:55:24.329111] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.654 [2024-11-26 20:55:24.341887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.654 [2024-11-26 20:55:24.342309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.654 [2024-11-26 20:55:24.342339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.654 [2024-11-26 20:55:24.342356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.654 [2024-11-26 20:55:24.342599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.654 [2024-11-26 20:55:24.342810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.654 [2024-11-26 20:55:24.342831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.654 [2024-11-26 20:55:24.342844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.654 [2024-11-26 20:55:24.342857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.936 [2024-11-26 20:55:24.355634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.936 [2024-11-26 20:55:24.355989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-11-26 20:55:24.356020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.936 [2024-11-26 20:55:24.356037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.936 [2024-11-26 20:55:24.356253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.936 [2024-11-26 20:55:24.356482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.936 [2024-11-26 20:55:24.356506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.936 [2024-11-26 20:55:24.356521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.936 [2024-11-26 20:55:24.356536] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.936 [2024-11-26 20:55:24.369049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.936 [2024-11-26 20:55:24.369393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-11-26 20:55:24.369424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.936 [2024-11-26 20:55:24.369440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.936 [2024-11-26 20:55:24.369684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.936 [2024-11-26 20:55:24.369885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.936 [2024-11-26 20:55:24.369907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.936 [2024-11-26 20:55:24.369920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.936 [2024-11-26 20:55:24.369934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.936 [2024-11-26 20:55:24.382476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.936 [2024-11-26 20:55:24.382853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-11-26 20:55:24.382880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.936 [2024-11-26 20:55:24.382896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.936 [2024-11-26 20:55:24.383112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.936 [2024-11-26 20:55:24.383347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.936 [2024-11-26 20:55:24.383369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.936 [2024-11-26 20:55:24.383382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.936 [2024-11-26 20:55:24.383395] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.936 [2024-11-26 20:55:24.395748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.936 [2024-11-26 20:55:24.396079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-11-26 20:55:24.396107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.936 [2024-11-26 20:55:24.396122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.936 [2024-11-26 20:55:24.396352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.936 [2024-11-26 20:55:24.396575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.936 [2024-11-26 20:55:24.396597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.936 [2024-11-26 20:55:24.396626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.936 [2024-11-26 20:55:24.396640] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.936 [2024-11-26 20:55:24.409008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.936 [2024-11-26 20:55:24.409429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-11-26 20:55:24.409459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.936 [2024-11-26 20:55:24.409475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.936 [2024-11-26 20:55:24.409715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.936 [2024-11-26 20:55:24.409924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.936 [2024-11-26 20:55:24.409946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.936 [2024-11-26 20:55:24.409959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.936 [2024-11-26 20:55:24.409972] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.936 [2024-11-26 20:55:24.422239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.936 [2024-11-26 20:55:24.422593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-11-26 20:55:24.422621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.936 [2024-11-26 20:55:24.422641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.936 [2024-11-26 20:55:24.422853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.936 [2024-11-26 20:55:24.423047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.936 [2024-11-26 20:55:24.423068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.936 [2024-11-26 20:55:24.423082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.936 [2024-11-26 20:55:24.423095] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.936 [2024-11-26 20:55:24.435511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.936 [2024-11-26 20:55:24.435852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.936 [2024-11-26 20:55:24.435879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.936 [2024-11-26 20:55:24.435895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.936 [2024-11-26 20:55:24.436111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.937 [2024-11-26 20:55:24.436348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.937 [2024-11-26 20:55:24.436371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.937 [2024-11-26 20:55:24.436400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.937 [2024-11-26 20:55:24.436415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.937 [2024-11-26 20:55:24.448840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.937 [2024-11-26 20:55:24.449236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.937 [2024-11-26 20:55:24.449264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.937 [2024-11-26 20:55:24.449280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.937 [2024-11-26 20:55:24.449512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.937 [2024-11-26 20:55:24.449742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.937 [2024-11-26 20:55:24.449764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.937 [2024-11-26 20:55:24.449777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.937 [2024-11-26 20:55:24.449789] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.937 [2024-11-26 20:55:24.462138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.937 [2024-11-26 20:55:24.462487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.937 [2024-11-26 20:55:24.462516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.937 [2024-11-26 20:55:24.462531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.937 [2024-11-26 20:55:24.462766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.937 [2024-11-26 20:55:24.462980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.937 [2024-11-26 20:55:24.463002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.937 [2024-11-26 20:55:24.463015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.937 [2024-11-26 20:55:24.463027] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.937 [2024-11-26 20:55:24.475432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.937 [2024-11-26 20:55:24.475808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.937 [2024-11-26 20:55:24.475837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.937 [2024-11-26 20:55:24.475853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.937 [2024-11-26 20:55:24.476088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.937 [2024-11-26 20:55:24.476310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.937 [2024-11-26 20:55:24.476347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.937 [2024-11-26 20:55:24.476362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.937 [2024-11-26 20:55:24.476375] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.937 [2024-11-26 20:55:24.488739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.937 [2024-11-26 20:55:24.489089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.937 [2024-11-26 20:55:24.489117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.937 [2024-11-26 20:55:24.489132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.937 [2024-11-26 20:55:24.489368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.937 [2024-11-26 20:55:24.489612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.937 [2024-11-26 20:55:24.489635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.937 [2024-11-26 20:55:24.489649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.937 [2024-11-26 20:55:24.489677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.937 [2024-11-26 20:55:24.502081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.937 [2024-11-26 20:55:24.502437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.937 [2024-11-26 20:55:24.502466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.937 [2024-11-26 20:55:24.502482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.937 [2024-11-26 20:55:24.502717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.937 [2024-11-26 20:55:24.502927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.937 [2024-11-26 20:55:24.502948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.937 [2024-11-26 20:55:24.502961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.937 [2024-11-26 20:55:24.502979] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.937 [2024-11-26 20:55:24.515392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.937 [2024-11-26 20:55:24.515723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.937 [2024-11-26 20:55:24.515751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.937 [2024-11-26 20:55:24.515766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.937 [2024-11-26 20:55:24.515969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.937 [2024-11-26 20:55:24.516194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.937 [2024-11-26 20:55:24.516216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.937 [2024-11-26 20:55:24.516229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.937 [2024-11-26 20:55:24.516242] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.937 [2024-11-26 20:55:24.528700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.937 [2024-11-26 20:55:24.529054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.937 [2024-11-26 20:55:24.529084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.937 [2024-11-26 20:55:24.529100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.937 [2024-11-26 20:55:24.529333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.937 [2024-11-26 20:55:24.529540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.937 [2024-11-26 20:55:24.529562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.937 [2024-11-26 20:55:24.529576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.937 [2024-11-26 20:55:24.529590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.937 [2024-11-26 20:55:24.542013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.937 [2024-11-26 20:55:24.542370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.937 [2024-11-26 20:55:24.542399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.937 [2024-11-26 20:55:24.542415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.937 [2024-11-26 20:55:24.542651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.937 [2024-11-26 20:55:24.542845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.937 [2024-11-26 20:55:24.542866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.937 [2024-11-26 20:55:24.542878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.937 [2024-11-26 20:55:24.542890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.937 [2024-11-26 20:55:24.555312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.937 [2024-11-26 20:55:24.555674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.937 [2024-11-26 20:55:24.555703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.937 [2024-11-26 20:55:24.555719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.937 [2024-11-26 20:55:24.555940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.937 [2024-11-26 20:55:24.556149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.937 [2024-11-26 20:55:24.556180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.937 [2024-11-26 20:55:24.556194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.937 [2024-11-26 20:55:24.556207] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.937 [2024-11-26 20:55:24.568619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.937 [2024-11-26 20:55:24.568971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.938 [2024-11-26 20:55:24.569000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.938 [2024-11-26 20:55:24.569016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.938 [2024-11-26 20:55:24.569252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.938 [2024-11-26 20:55:24.569493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.938 [2024-11-26 20:55:24.569516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.938 [2024-11-26 20:55:24.569530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.938 [2024-11-26 20:55:24.569543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.938 [2024-11-26 20:55:24.581827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.938 [2024-11-26 20:55:24.582150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.938 [2024-11-26 20:55:24.582177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.938 [2024-11-26 20:55:24.582193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.938 [2024-11-26 20:55:24.582424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.938 [2024-11-26 20:55:24.582659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.938 [2024-11-26 20:55:24.582680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.938 [2024-11-26 20:55:24.582692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.938 [2024-11-26 20:55:24.582704] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.938 [2024-11-26 20:55:24.595026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.938 [2024-11-26 20:55:24.595348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.938 [2024-11-26 20:55:24.595378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.938 [2024-11-26 20:55:24.595394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.938 [2024-11-26 20:55:24.595622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.938 [2024-11-26 20:55:24.595832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.938 [2024-11-26 20:55:24.595852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.938 [2024-11-26 20:55:24.595865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.938 [2024-11-26 20:55:24.595878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:20.938 [2024-11-26 20:55:24.608235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.938 [2024-11-26 20:55:24.608659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.938 [2024-11-26 20:55:24.608689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.938 [2024-11-26 20:55:24.608705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.938 [2024-11-26 20:55:24.608962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:20.938 [2024-11-26 20:55:24.609161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:20.938 [2024-11-26 20:55:24.609182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:20.938 [2024-11-26 20:55:24.609212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:20.938 [2024-11-26 20:55:24.609226] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:20.938 5274.25 IOPS, 20.60 MiB/s [2024-11-26T19:55:24.635Z] [2024-11-26 20:55:24.623362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:20.938 [2024-11-26 20:55:24.623693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.938 [2024-11-26 20:55:24.623723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:20.938 [2024-11-26 20:55:24.623740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:20.938 [2024-11-26 20:55:24.623954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.199 [2024-11-26 20:55:24.624173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.199 [2024-11-26 20:55:24.624197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.199 [2024-11-26 20:55:24.624213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.199 [2024-11-26 20:55:24.624227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.199 [2024-11-26 20:55:24.637091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.199 [2024-11-26 20:55:24.637432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-11-26 20:55:24.637462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.199 [2024-11-26 20:55:24.637479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.199 [2024-11-26 20:55:24.637735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.199 [2024-11-26 20:55:24.637931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.199 [2024-11-26 20:55:24.637957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.199 [2024-11-26 20:55:24.637970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.199 [2024-11-26 20:55:24.637982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.199 [2024-11-26 20:55:24.650294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.199 [2024-11-26 20:55:24.650631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-11-26 20:55:24.650659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.199 [2024-11-26 20:55:24.650675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.199 [2024-11-26 20:55:24.650891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.199 [2024-11-26 20:55:24.651101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.199 [2024-11-26 20:55:24.651121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.199 [2024-11-26 20:55:24.651134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.199 [2024-11-26 20:55:24.651147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.199 [2024-11-26 20:55:24.663538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.199 [2024-11-26 20:55:24.663875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-11-26 20:55:24.663904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.199 [2024-11-26 20:55:24.663920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.199 [2024-11-26 20:55:24.664136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.199 [2024-11-26 20:55:24.664390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.199 [2024-11-26 20:55:24.664413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.199 [2024-11-26 20:55:24.664428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.199 [2024-11-26 20:55:24.664441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.199 [2024-11-26 20:55:24.676906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.199 [2024-11-26 20:55:24.677281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-11-26 20:55:24.677316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.199 [2024-11-26 20:55:24.677334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.199 [2024-11-26 20:55:24.677571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.199 [2024-11-26 20:55:24.677782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.199 [2024-11-26 20:55:24.677803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.199 [2024-11-26 20:55:24.677816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.200 [2024-11-26 20:55:24.677833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.200 [2024-11-26 20:55:24.690168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.200 [2024-11-26 20:55:24.690505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-11-26 20:55:24.690533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.200 [2024-11-26 20:55:24.690549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.200 [2024-11-26 20:55:24.690767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.200 [2024-11-26 20:55:24.690976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.200 [2024-11-26 20:55:24.690997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.200 [2024-11-26 20:55:24.691009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.200 [2024-11-26 20:55:24.691022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.200 [2024-11-26 20:55:24.703432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.200 [2024-11-26 20:55:24.703769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-11-26 20:55:24.703798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.200 [2024-11-26 20:55:24.703813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.200 [2024-11-26 20:55:24.704016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.200 [2024-11-26 20:55:24.704241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.200 [2024-11-26 20:55:24.704262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.200 [2024-11-26 20:55:24.704276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.200 [2024-11-26 20:55:24.704313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.200 [2024-11-26 20:55:24.716689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.200 [2024-11-26 20:55:24.717045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-11-26 20:55:24.717073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.200 [2024-11-26 20:55:24.717089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.200 [2024-11-26 20:55:24.717335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.200 [2024-11-26 20:55:24.717554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.200 [2024-11-26 20:55:24.717576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.200 [2024-11-26 20:55:24.717590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.200 [2024-11-26 20:55:24.717603] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.200 [2024-11-26 20:55:24.729912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.200 [2024-11-26 20:55:24.730267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-11-26 20:55:24.730318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.200 [2024-11-26 20:55:24.730337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.200 [2024-11-26 20:55:24.730569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.200 [2024-11-26 20:55:24.730780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.200 [2024-11-26 20:55:24.730802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.200 [2024-11-26 20:55:24.730815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.200 [2024-11-26 20:55:24.730827] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.200 [2024-11-26 20:55:24.743236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.200 [2024-11-26 20:55:24.743587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-11-26 20:55:24.743616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.200 [2024-11-26 20:55:24.743632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.200 [2024-11-26 20:55:24.743864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.200 [2024-11-26 20:55:24.744074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.200 [2024-11-26 20:55:24.744095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.200 [2024-11-26 20:55:24.744108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.200 [2024-11-26 20:55:24.744120] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.200 [2024-11-26 20:55:24.756539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.200 [2024-11-26 20:55:24.756906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-11-26 20:55:24.756935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.200 [2024-11-26 20:55:24.756951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.200 [2024-11-26 20:55:24.757186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.200 [2024-11-26 20:55:24.757427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.200 [2024-11-26 20:55:24.757450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.200 [2024-11-26 20:55:24.757463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.200 [2024-11-26 20:55:24.757476] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.200 [2024-11-26 20:55:24.769868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.200 [2024-11-26 20:55:24.770188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-11-26 20:55:24.770216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.200 [2024-11-26 20:55:24.770232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.200 [2024-11-26 20:55:24.770485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.200 [2024-11-26 20:55:24.770716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.200 [2024-11-26 20:55:24.770737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.200 [2024-11-26 20:55:24.770751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.200 [2024-11-26 20:55:24.770763] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.200 [2024-11-26 20:55:24.783214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.200 [2024-11-26 20:55:24.783597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-11-26 20:55:24.783640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.200 [2024-11-26 20:55:24.783656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.200 [2024-11-26 20:55:24.783892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.200 [2024-11-26 20:55:24.784085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.200 [2024-11-26 20:55:24.784106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.200 [2024-11-26 20:55:24.784119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.200 [2024-11-26 20:55:24.784131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.200 [2024-11-26 20:55:24.796529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.200 [2024-11-26 20:55:24.796900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-11-26 20:55:24.796928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.201 [2024-11-26 20:55:24.796943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.201 [2024-11-26 20:55:24.797166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.201 [2024-11-26 20:55:24.797419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.201 [2024-11-26 20:55:24.797442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.201 [2024-11-26 20:55:24.797456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.201 [2024-11-26 20:55:24.797469] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.201 [2024-11-26 20:55:24.809716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.201 [2024-11-26 20:55:24.810078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-11-26 20:55:24.810105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.201 [2024-11-26 20:55:24.810120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.201 [2024-11-26 20:55:24.810346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.201 [2024-11-26 20:55:24.810567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.201 [2024-11-26 20:55:24.810594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.201 [2024-11-26 20:55:24.810608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.201 [2024-11-26 20:55:24.810622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.201 [2024-11-26 20:55:24.823100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.201 [2024-11-26 20:55:24.823480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-11-26 20:55:24.823511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.201 [2024-11-26 20:55:24.823528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.201 [2024-11-26 20:55:24.823782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.201 [2024-11-26 20:55:24.823977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.201 [2024-11-26 20:55:24.823998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.201 [2024-11-26 20:55:24.824011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.201 [2024-11-26 20:55:24.824024] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.201 [2024-11-26 20:55:24.836510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.201 [2024-11-26 20:55:24.836840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-11-26 20:55:24.836870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.201 [2024-11-26 20:55:24.836886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.201 [2024-11-26 20:55:24.837116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.201 [2024-11-26 20:55:24.837358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.201 [2024-11-26 20:55:24.837383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.201 [2024-11-26 20:55:24.837397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.201 [2024-11-26 20:55:24.837411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.201 [2024-11-26 20:55:24.849924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.201 [2024-11-26 20:55:24.850282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-11-26 20:55:24.850318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.201 [2024-11-26 20:55:24.850337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.201 [2024-11-26 20:55:24.850568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.201 [2024-11-26 20:55:24.850779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.201 [2024-11-26 20:55:24.850800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.201 [2024-11-26 20:55:24.850813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.201 [2024-11-26 20:55:24.850835] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.201 [2024-11-26 20:55:24.863183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.201 [2024-11-26 20:55:24.863551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-11-26 20:55:24.863580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.201 [2024-11-26 20:55:24.863597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.201 [2024-11-26 20:55:24.863812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.201 [2024-11-26 20:55:24.864051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.201 [2024-11-26 20:55:24.864073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.201 [2024-11-26 20:55:24.864088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.201 [2024-11-26 20:55:24.864101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.201 [2024-11-26 20:55:24.876685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.201 [2024-11-26 20:55:24.877056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-11-26 20:55:24.877099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.201 [2024-11-26 20:55:24.877115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.201 [2024-11-26 20:55:24.877345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.201 [2024-11-26 20:55:24.877550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.201 [2024-11-26 20:55:24.877570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.201 [2024-11-26 20:55:24.877584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.201 [2024-11-26 20:55:24.877612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.201 [2024-11-26 20:55:24.890054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.201 [2024-11-26 20:55:24.890446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-11-26 20:55:24.890476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.201 [2024-11-26 20:55:24.890492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.201 [2024-11-26 20:55:24.890731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.201 [2024-11-26 20:55:24.890940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.201 [2024-11-26 20:55:24.890975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.201 [2024-11-26 20:55:24.890990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.201 [2024-11-26 20:55:24.891004] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.462 [2024-11-26 20:55:24.903384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.462 [2024-11-26 20:55:24.903736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.462 [2024-11-26 20:55:24.903764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.462 [2024-11-26 20:55:24.903779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.462 [2024-11-26 20:55:24.903996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.462 [2024-11-26 20:55:24.904206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.462 [2024-11-26 20:55:24.904227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.462 [2024-11-26 20:55:24.904240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.462 [2024-11-26 20:55:24.904253] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.462 [2024-11-26 20:55:24.916713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.462 [2024-11-26 20:55:24.917072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.462 [2024-11-26 20:55:24.917100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.462 [2024-11-26 20:55:24.917116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.462 [2024-11-26 20:55:24.917349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.462 [2024-11-26 20:55:24.917555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.462 [2024-11-26 20:55:24.917576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.462 [2024-11-26 20:55:24.917604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.462 [2024-11-26 20:55:24.917617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.462 [2024-11-26 20:55:24.930043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.462 [2024-11-26 20:55:24.930459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.462 [2024-11-26 20:55:24.930489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.462 [2024-11-26 20:55:24.930505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.462 [2024-11-26 20:55:24.930743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.462 [2024-11-26 20:55:24.930937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.462 [2024-11-26 20:55:24.930958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.462 [2024-11-26 20:55:24.930971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.462 [2024-11-26 20:55:24.930983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.462 [2024-11-26 20:55:24.943248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.462 [2024-11-26 20:55:24.943627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.462 [2024-11-26 20:55:24.943656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.462 [2024-11-26 20:55:24.943673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.462 [2024-11-26 20:55:24.943913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.462 [2024-11-26 20:55:24.944107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.462 [2024-11-26 20:55:24.944128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.462 [2024-11-26 20:55:24.944140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.462 [2024-11-26 20:55:24.944153] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.462 [2024-11-26 20:55:24.956439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.462 [2024-11-26 20:55:24.956810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.462 [2024-11-26 20:55:24.956838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.462 [2024-11-26 20:55:24.956854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.462 [2024-11-26 20:55:24.957091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.462 [2024-11-26 20:55:24.957287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.462 [2024-11-26 20:55:24.957331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.462 [2024-11-26 20:55:24.957347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.462 [2024-11-26 20:55:24.957361] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.462 [2024-11-26 20:55:24.969706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.462 [2024-11-26 20:55:24.970056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.462 [2024-11-26 20:55:24.970084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.463 [2024-11-26 20:55:24.970099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.463 [2024-11-26 20:55:24.970346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.463 [2024-11-26 20:55:24.970567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.463 [2024-11-26 20:55:24.970589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.463 [2024-11-26 20:55:24.970603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.463 [2024-11-26 20:55:24.970616] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.463 [2024-11-26 20:55:24.983019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.463 [2024-11-26 20:55:24.983374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.463 [2024-11-26 20:55:24.983403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.463 [2024-11-26 20:55:24.983420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.463 [2024-11-26 20:55:24.983662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.463 [2024-11-26 20:55:24.983855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.463 [2024-11-26 20:55:24.983880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.463 [2024-11-26 20:55:24.983894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.463 [2024-11-26 20:55:24.983907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.463 [2024-11-26 20:55:24.996422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.463 [2024-11-26 20:55:24.996774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.463 [2024-11-26 20:55:24.996800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.463 [2024-11-26 20:55:24.996815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.463 [2024-11-26 20:55:24.997024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.463 [2024-11-26 20:55:24.997228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.463 [2024-11-26 20:55:24.997246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.463 [2024-11-26 20:55:24.997259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.463 [2024-11-26 20:55:24.997271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.463 [2024-11-26 20:55:25.009784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.463 [2024-11-26 20:55:25.010177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.463 [2024-11-26 20:55:25.010233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.463 [2024-11-26 20:55:25.010249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.463 [2024-11-26 20:55:25.010504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.463 [2024-11-26 20:55:25.010734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.463 [2024-11-26 20:55:25.010754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.463 [2024-11-26 20:55:25.010767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.463 [2024-11-26 20:55:25.010779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.463 [2024-11-26 20:55:25.023175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.463 [2024-11-26 20:55:25.023545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.463 [2024-11-26 20:55:25.023575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.463 [2024-11-26 20:55:25.023592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.463 [2024-11-26 20:55:25.023842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.463 [2024-11-26 20:55:25.024036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.463 [2024-11-26 20:55:25.024067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.463 [2024-11-26 20:55:25.024080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.463 [2024-11-26 20:55:25.024098] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.463 [2024-11-26 20:55:25.036746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.463 [2024-11-26 20:55:25.037099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.463 [2024-11-26 20:55:25.037129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.463 [2024-11-26 20:55:25.037145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.463 [2024-11-26 20:55:25.037396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.463 [2024-11-26 20:55:25.037616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.463 [2024-11-26 20:55:25.037639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.463 [2024-11-26 20:55:25.037669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.463 [2024-11-26 20:55:25.037683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.463 [2024-11-26 20:55:25.050250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.463 [2024-11-26 20:55:25.050702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.463 [2024-11-26 20:55:25.050730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.463 [2024-11-26 20:55:25.050746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.463 [2024-11-26 20:55:25.050990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.463 [2024-11-26 20:55:25.051179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.463 [2024-11-26 20:55:25.051208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.463 [2024-11-26 20:55:25.051221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.463 [2024-11-26 20:55:25.051233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.463 [2024-11-26 20:55:25.063813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.463 [2024-11-26 20:55:25.064227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.463 [2024-11-26 20:55:25.064279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.463 [2024-11-26 20:55:25.064325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.463 [2024-11-26 20:55:25.064542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.463 [2024-11-26 20:55:25.064782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.463 [2024-11-26 20:55:25.064803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.463 [2024-11-26 20:55:25.064817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.463 [2024-11-26 20:55:25.064844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.463 [2024-11-26 20:55:25.077610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.463 [2024-11-26 20:55:25.078026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.463 [2024-11-26 20:55:25.078084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.463 [2024-11-26 20:55:25.078101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.463 [2024-11-26 20:55:25.078358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.463 [2024-11-26 20:55:25.078577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.463 [2024-11-26 20:55:25.078603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.463 [2024-11-26 20:55:25.078633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.463 [2024-11-26 20:55:25.078649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.463 [2024-11-26 20:55:25.091079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.463 [2024-11-26 20:55:25.091429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.463 [2024-11-26 20:55:25.091457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.463 [2024-11-26 20:55:25.091473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.463 [2024-11-26 20:55:25.091722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.463 [2024-11-26 20:55:25.091922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.463 [2024-11-26 20:55:25.091942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.463 [2024-11-26 20:55:25.091955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.463 [2024-11-26 20:55:25.091967] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.463 [2024-11-26 20:55:25.104684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.464 [2024-11-26 20:55:25.105091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.464 [2024-11-26 20:55:25.105121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.464 [2024-11-26 20:55:25.105137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.464 [2024-11-26 20:55:25.105387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.464 [2024-11-26 20:55:25.105607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.464 [2024-11-26 20:55:25.105630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.464 [2024-11-26 20:55:25.105644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.464 [2024-11-26 20:55:25.105658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.464 [2024-11-26 20:55:25.117995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.464 [2024-11-26 20:55:25.118408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.464 [2024-11-26 20:55:25.118460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.464 [2024-11-26 20:55:25.118477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.464 [2024-11-26 20:55:25.118723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.464 [2024-11-26 20:55:25.118917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.464 [2024-11-26 20:55:25.118943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.464 [2024-11-26 20:55:25.118956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.464 [2024-11-26 20:55:25.118969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.464 [2024-11-26 20:55:25.131358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.464 [2024-11-26 20:55:25.131739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.464 [2024-11-26 20:55:25.131787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.464 [2024-11-26 20:55:25.131803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.464 [2024-11-26 20:55:25.132036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.464 [2024-11-26 20:55:25.132253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.464 [2024-11-26 20:55:25.132272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.464 [2024-11-26 20:55:25.132300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.464 [2024-11-26 20:55:25.132326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.464 [2024-11-26 20:55:25.144687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.464 [2024-11-26 20:55:25.145102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.464 [2024-11-26 20:55:25.145129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.464 [2024-11-26 20:55:25.145144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.464 [2024-11-26 20:55:25.145384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.464 [2024-11-26 20:55:25.145579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.464 [2024-11-26 20:55:25.145613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.464 [2024-11-26 20:55:25.145626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.464 [2024-11-26 20:55:25.145638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.723 [2024-11-26 20:55:25.158221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.723 [2024-11-26 20:55:25.158714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.723 [2024-11-26 20:55:25.158768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.723 [2024-11-26 20:55:25.158783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.723 [2024-11-26 20:55:25.159022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.723 [2024-11-26 20:55:25.159209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.723 [2024-11-26 20:55:25.159235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.723 [2024-11-26 20:55:25.159248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.723 [2024-11-26 20:55:25.159261] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.723 [2024-11-26 20:55:25.171419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.723 [2024-11-26 20:55:25.171752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.723 [2024-11-26 20:55:25.171780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.723 [2024-11-26 20:55:25.171796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.723 [2024-11-26 20:55:25.172014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.723 [2024-11-26 20:55:25.172218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.724 [2024-11-26 20:55:25.172238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.724 [2024-11-26 20:55:25.172252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.724 [2024-11-26 20:55:25.172264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.724 [2024-11-26 20:55:25.184491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.724 [2024-11-26 20:55:25.184894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.724 [2024-11-26 20:55:25.184921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.724 [2024-11-26 20:55:25.184936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.724 [2024-11-26 20:55:25.185169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.724 [2024-11-26 20:55:25.185406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.724 [2024-11-26 20:55:25.185429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.724 [2024-11-26 20:55:25.185443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.724 [2024-11-26 20:55:25.185457] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.724 [2024-11-26 20:55:25.197619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.724 [2024-11-26 20:55:25.198025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.724 [2024-11-26 20:55:25.198052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.724 [2024-11-26 20:55:25.198067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.724 [2024-11-26 20:55:25.198282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.724 [2024-11-26 20:55:25.198528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.724 [2024-11-26 20:55:25.198550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.724 [2024-11-26 20:55:25.198565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.724 [2024-11-26 20:55:25.198578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.724 [2024-11-26 20:55:25.210762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.724 [2024-11-26 20:55:25.211106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.724 [2024-11-26 20:55:25.211133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.724 [2024-11-26 20:55:25.211149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.724 [2024-11-26 20:55:25.211397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.724 [2024-11-26 20:55:25.211627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.724 [2024-11-26 20:55:25.211649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.724 [2024-11-26 20:55:25.211662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.724 [2024-11-26 20:55:25.211675] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.724 [2024-11-26 20:55:25.224012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.724 [2024-11-26 20:55:25.224366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.724 [2024-11-26 20:55:25.224396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.724 [2024-11-26 20:55:25.224412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.724 [2024-11-26 20:55:25.224655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.724 [2024-11-26 20:55:25.224860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.724 [2024-11-26 20:55:25.224880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.724 [2024-11-26 20:55:25.224892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.724 [2024-11-26 20:55:25.224904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.724 [2024-11-26 20:55:25.237217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.724 [2024-11-26 20:55:25.237590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.724 [2024-11-26 20:55:25.237653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.724 [2024-11-26 20:55:25.237669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.724 [2024-11-26 20:55:25.237933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.724 [2024-11-26 20:55:25.238122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.724 [2024-11-26 20:55:25.238142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.724 [2024-11-26 20:55:25.238154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.724 [2024-11-26 20:55:25.238166] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.724 [2024-11-26 20:55:25.250423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.724 [2024-11-26 20:55:25.250792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.724 [2024-11-26 20:55:25.250824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.724 [2024-11-26 20:55:25.250841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.724 [2024-11-26 20:55:25.251075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.724 [2024-11-26 20:55:25.251281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.724 [2024-11-26 20:55:25.251323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.724 [2024-11-26 20:55:25.251338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.724 [2024-11-26 20:55:25.251365] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.724 [2024-11-26 20:55:25.263581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.724 [2024-11-26 20:55:25.263966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.724 [2024-11-26 20:55:25.263993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.724 [2024-11-26 20:55:25.264009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.724 [2024-11-26 20:55:25.264225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.724 [2024-11-26 20:55:25.264476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.724 [2024-11-26 20:55:25.264499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.724 [2024-11-26 20:55:25.264513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.724 [2024-11-26 20:55:25.264527] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.724 [2024-11-26 20:55:25.276784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.724 [2024-11-26 20:55:25.277116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.724 [2024-11-26 20:55:25.277145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.724 [2024-11-26 20:55:25.277162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.724 [2024-11-26 20:55:25.277422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.724 [2024-11-26 20:55:25.277646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.724 [2024-11-26 20:55:25.277666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.724 [2024-11-26 20:55:25.277693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.724 [2024-11-26 20:55:25.277705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.724 [2024-11-26 20:55:25.289972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.724 [2024-11-26 20:55:25.290315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.724 [2024-11-26 20:55:25.290368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.724 [2024-11-26 20:55:25.290385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.724 [2024-11-26 20:55:25.290620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.724 [2024-11-26 20:55:25.290831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.724 [2024-11-26 20:55:25.290851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.724 [2024-11-26 20:55:25.290863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.724 [2024-11-26 20:55:25.290887] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.724 [2024-11-26 20:55:25.303094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.724 [2024-11-26 20:55:25.303445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.724 [2024-11-26 20:55:25.303474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.725 [2024-11-26 20:55:25.303490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.725 [2024-11-26 20:55:25.303712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.725 [2024-11-26 20:55:25.303918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.725 [2024-11-26 20:55:25.303938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.725 [2024-11-26 20:55:25.303950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.725 [2024-11-26 20:55:25.303962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.725 [2024-11-26 20:55:25.316358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.725 [2024-11-26 20:55:25.316703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.725 [2024-11-26 20:55:25.316731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.725 [2024-11-26 20:55:25.316747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.725 [2024-11-26 20:55:25.316982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.725 [2024-11-26 20:55:25.317188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.725 [2024-11-26 20:55:25.317208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.725 [2024-11-26 20:55:25.317221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.725 [2024-11-26 20:55:25.317232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.725 [2024-11-26 20:55:25.329540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.725 [2024-11-26 20:55:25.329918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.725 [2024-11-26 20:55:25.329946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.725 [2024-11-26 20:55:25.329962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.725 [2024-11-26 20:55:25.330201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.725 [2024-11-26 20:55:25.330455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.725 [2024-11-26 20:55:25.330477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.725 [2024-11-26 20:55:25.330496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.725 [2024-11-26 20:55:25.330510] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.725 [2024-11-26 20:55:25.342695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.725 [2024-11-26 20:55:25.343037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.725 [2024-11-26 20:55:25.343064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.725 [2024-11-26 20:55:25.343079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.725 [2024-11-26 20:55:25.343318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.725 [2024-11-26 20:55:25.343525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.725 [2024-11-26 20:55:25.343546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.725 [2024-11-26 20:55:25.343560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.725 [2024-11-26 20:55:25.343573] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.725 [2024-11-26 20:55:25.355918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.725 [2024-11-26 20:55:25.356396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.725 [2024-11-26 20:55:25.356425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.725 [2024-11-26 20:55:25.356441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.725 [2024-11-26 20:55:25.356697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.725 [2024-11-26 20:55:25.356885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.725 [2024-11-26 20:55:25.356905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.725 [2024-11-26 20:55:25.356919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.725 [2024-11-26 20:55:25.356931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.725 [2024-11-26 20:55:25.369026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.725 [2024-11-26 20:55:25.369400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.725 [2024-11-26 20:55:25.369430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.725 [2024-11-26 20:55:25.369447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.725 [2024-11-26 20:55:25.369678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.725 [2024-11-26 20:55:25.369905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.725 [2024-11-26 20:55:25.369943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.725 [2024-11-26 20:55:25.369957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.725 [2024-11-26 20:55:25.369970] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.725 [2024-11-26 20:55:25.382349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.725 [2024-11-26 20:55:25.382760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.725 [2024-11-26 20:55:25.382789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.725 [2024-11-26 20:55:25.382804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.725 [2024-11-26 20:55:25.383042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.725 [2024-11-26 20:55:25.383246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.725 [2024-11-26 20:55:25.383266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.725 [2024-11-26 20:55:25.383293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.725 [2024-11-26 20:55:25.383318] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.725 [2024-11-26 20:55:25.395706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.725 [2024-11-26 20:55:25.396164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.725 [2024-11-26 20:55:25.396219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.725 [2024-11-26 20:55:25.396235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.725 [2024-11-26 20:55:25.396522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.725 [2024-11-26 20:55:25.396733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.725 [2024-11-26 20:55:25.396754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.725 [2024-11-26 20:55:25.396768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.725 [2024-11-26 20:55:25.396781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.725 [2024-11-26 20:55:25.408870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.725 [2024-11-26 20:55:25.409222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.725 [2024-11-26 20:55:25.409252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.725 [2024-11-26 20:55:25.409268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.725 [2024-11-26 20:55:25.409521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.725 [2024-11-26 20:55:25.409734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.725 [2024-11-26 20:55:25.409755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.725 [2024-11-26 20:55:25.409767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.725 [2024-11-26 20:55:25.409779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.984 [2024-11-26 20:55:25.422009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.984 [2024-11-26 20:55:25.422425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.984 [2024-11-26 20:55:25.422454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.984 [2024-11-26 20:55:25.422476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.984 [2024-11-26 20:55:25.422721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.984 [2024-11-26 20:55:25.422948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.984 [2024-11-26 20:55:25.422971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.985 [2024-11-26 20:55:25.422985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.985 [2024-11-26 20:55:25.422999] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.985 [2024-11-26 20:55:25.435095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.985 [2024-11-26 20:55:25.435488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.985 [2024-11-26 20:55:25.435516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.985 [2024-11-26 20:55:25.435531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.985 [2024-11-26 20:55:25.435747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.985 [2024-11-26 20:55:25.435968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.985 [2024-11-26 20:55:25.435990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.985 [2024-11-26 20:55:25.436003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.985 [2024-11-26 20:55:25.436015] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.985 [2024-11-26 20:55:25.448226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.985 [2024-11-26 20:55:25.448579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.985 [2024-11-26 20:55:25.448608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.985 [2024-11-26 20:55:25.448624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.985 [2024-11-26 20:55:25.448858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.985 [2024-11-26 20:55:25.449061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.985 [2024-11-26 20:55:25.449082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.985 [2024-11-26 20:55:25.449096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.985 [2024-11-26 20:55:25.449109] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.985 [2024-11-26 20:55:25.461245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.985 [2024-11-26 20:55:25.461594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.985 [2024-11-26 20:55:25.461622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.985 [2024-11-26 20:55:25.461638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.985 [2024-11-26 20:55:25.461873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.985 [2024-11-26 20:55:25.462081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.985 [2024-11-26 20:55:25.462112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.985 [2024-11-26 20:55:25.462125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.985 [2024-11-26 20:55:25.462138] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.985 [2024-11-26 20:55:25.474309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.985 [2024-11-26 20:55:25.474592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.985 [2024-11-26 20:55:25.474635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.985 [2024-11-26 20:55:25.474651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.985 [2024-11-26 20:55:25.474848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.985 [2024-11-26 20:55:25.475068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.985 [2024-11-26 20:55:25.475088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.985 [2024-11-26 20:55:25.475100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.985 [2024-11-26 20:55:25.475112] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.985 [2024-11-26 20:55:25.487470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.985 [2024-11-26 20:55:25.487898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.985 [2024-11-26 20:55:25.487928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.985 [2024-11-26 20:55:25.487944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.985 [2024-11-26 20:55:25.488186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.985 [2024-11-26 20:55:25.488437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.985 [2024-11-26 20:55:25.488459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.985 [2024-11-26 20:55:25.488473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.985 [2024-11-26 20:55:25.488487] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.985 [2024-11-26 20:55:25.500467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.985 [2024-11-26 20:55:25.500829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.985 [2024-11-26 20:55:25.500858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.985 [2024-11-26 20:55:25.500874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.985 [2024-11-26 20:55:25.501109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.985 [2024-11-26 20:55:25.501338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.985 [2024-11-26 20:55:25.501360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.985 [2024-11-26 20:55:25.501395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.985 [2024-11-26 20:55:25.501409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.985 [2024-11-26 20:55:25.513522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.985 [2024-11-26 20:55:25.513925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.985 [2024-11-26 20:55:25.513952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.985 [2024-11-26 20:55:25.513967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.985 [2024-11-26 20:55:25.514181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.985 [2024-11-26 20:55:25.514414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.985 [2024-11-26 20:55:25.514436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.985 [2024-11-26 20:55:25.514449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.985 [2024-11-26 20:55:25.514462] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.985 [2024-11-26 20:55:25.526610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.985 [2024-11-26 20:55:25.527019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.985 [2024-11-26 20:55:25.527047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.985 [2024-11-26 20:55:25.527063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.985 [2024-11-26 20:55:25.527298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.985 [2024-11-26 20:55:25.527527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.985 [2024-11-26 20:55:25.527548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.985 [2024-11-26 20:55:25.527561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.985 [2024-11-26 20:55:25.527575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.985 [2024-11-26 20:55:25.539723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.985 [2024-11-26 20:55:25.540077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.985 [2024-11-26 20:55:25.540106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.985 [2024-11-26 20:55:25.540121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.985 [2024-11-26 20:55:25.540369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.985 [2024-11-26 20:55:25.540570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.985 [2024-11-26 20:55:25.540590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.985 [2024-11-26 20:55:25.540618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.985 [2024-11-26 20:55:25.540631] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.985 [2024-11-26 20:55:25.552766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.985 [2024-11-26 20:55:25.553109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.985 [2024-11-26 20:55:25.553137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.985 [2024-11-26 20:55:25.553153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.986 [2024-11-26 20:55:25.553401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.986 [2024-11-26 20:55:25.553624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.986 [2024-11-26 20:55:25.553645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.986 [2024-11-26 20:55:25.553657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.986 [2024-11-26 20:55:25.553670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.986 [2024-11-26 20:55:25.565903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.986 [2024-11-26 20:55:25.566313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.986 [2024-11-26 20:55:25.566341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.986 [2024-11-26 20:55:25.566358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.986 [2024-11-26 20:55:25.566593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.986 [2024-11-26 20:55:25.566797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.986 [2024-11-26 20:55:25.566818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.986 [2024-11-26 20:55:25.566831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.986 [2024-11-26 20:55:25.566843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.986 [2024-11-26 20:55:25.578966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.986 [2024-11-26 20:55:25.579359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.986 [2024-11-26 20:55:25.579389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.986 [2024-11-26 20:55:25.579406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.986 [2024-11-26 20:55:25.579652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.986 [2024-11-26 20:55:25.579840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.986 [2024-11-26 20:55:25.579860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.986 [2024-11-26 20:55:25.579873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.986 [2024-11-26 20:55:25.579885] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.986 [2024-11-26 20:55:25.592036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.986 [2024-11-26 20:55:25.592336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.986 [2024-11-26 20:55:25.592364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.986 [2024-11-26 20:55:25.592385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.986 [2024-11-26 20:55:25.592596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.986 [2024-11-26 20:55:25.592801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.986 [2024-11-26 20:55:25.592822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.986 [2024-11-26 20:55:25.592834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.986 [2024-11-26 20:55:25.592846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.986 [2024-11-26 20:55:25.605167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.986 [2024-11-26 20:55:25.605519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.986 [2024-11-26 20:55:25.605548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.986 [2024-11-26 20:55:25.605564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.986 [2024-11-26 20:55:25.605799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.986 [2024-11-26 20:55:25.606004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.986 [2024-11-26 20:55:25.606024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.986 [2024-11-26 20:55:25.606036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.986 [2024-11-26 20:55:25.606047] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.986 [2024-11-26 20:55:25.618199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.986 [2024-11-26 20:55:25.618516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.986 [2024-11-26 20:55:25.618545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.986 [2024-11-26 20:55:25.618560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.986 [2024-11-26 20:55:25.618782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.986 [2024-11-26 20:55:25.618986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.986 [2024-11-26 20:55:25.619007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.986 [2024-11-26 20:55:25.619020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.986 [2024-11-26 20:55:25.619032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.986 4219.40 IOPS, 16.48 MiB/s [2024-11-26T19:55:25.683Z] [2024-11-26 20:55:25.631676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.986 [2024-11-26 20:55:25.632072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.986 [2024-11-26 20:55:25.632137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.986 [2024-11-26 20:55:25.632153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.986 [2024-11-26 20:55:25.632405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.986 [2024-11-26 20:55:25.632632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.986 [2024-11-26 20:55:25.632668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.986 [2024-11-26 20:55:25.632682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.986 [2024-11-26 20:55:25.632694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.986 [2024-11-26 20:55:25.644809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.986 [2024-11-26 20:55:25.645221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.986 [2024-11-26 20:55:25.645249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.986 [2024-11-26 20:55:25.645264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.986 [2024-11-26 20:55:25.645529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.986 [2024-11-26 20:55:25.645751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.986 [2024-11-26 20:55:25.645771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.986 [2024-11-26 20:55:25.645784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.986 [2024-11-26 20:55:25.645796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:21.986 [2024-11-26 20:55:25.657933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.986 [2024-11-26 20:55:25.658311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.986 [2024-11-26 20:55:25.658339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.986 [2024-11-26 20:55:25.658354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.986 [2024-11-26 20:55:25.658592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.986 [2024-11-26 20:55:25.658813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.986 [2024-11-26 20:55:25.658833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.986 [2024-11-26 20:55:25.658845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.986 [2024-11-26 20:55:25.658857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:21.986 [2024-11-26 20:55:25.670944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:21.986 [2024-11-26 20:55:25.671325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.986 [2024-11-26 20:55:25.671354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:21.986 [2024-11-26 20:55:25.671383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:21.986 [2024-11-26 20:55:25.671605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:21.986 [2024-11-26 20:55:25.671826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:21.986 [2024-11-26 20:55:25.671846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:21.986 [2024-11-26 20:55:25.671864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:21.986 [2024-11-26 20:55:25.671877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:22.245 [2024-11-26 20:55:25.684337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.245 [2024-11-26 20:55:25.684736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.245 [2024-11-26 20:55:25.684765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.245 [2024-11-26 20:55:25.684781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.245 [2024-11-26 20:55:25.685006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.245 [2024-11-26 20:55:25.685210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.245 [2024-11-26 20:55:25.685231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.245 [2024-11-26 20:55:25.685244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.245 [2024-11-26 20:55:25.685256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:22.245 [2024-11-26 20:55:25.697420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.245 [2024-11-26 20:55:25.697763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.245 [2024-11-26 20:55:25.697791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.245 [2024-11-26 20:55:25.697807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.245 [2024-11-26 20:55:25.698042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.245 [2024-11-26 20:55:25.698246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.245 [2024-11-26 20:55:25.698277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.245 [2024-11-26 20:55:25.698290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.245 [2024-11-26 20:55:25.698312] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:22.245 [2024-11-26 20:55:25.710429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.245 [2024-11-26 20:55:25.710884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.245 [2024-11-26 20:55:25.710939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.245 [2024-11-26 20:55:25.710954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.246 [2024-11-26 20:55:25.711194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.246 [2024-11-26 20:55:25.711411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.246 [2024-11-26 20:55:25.711432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.246 [2024-11-26 20:55:25.711444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.246 [2024-11-26 20:55:25.711458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:22.246 [2024-11-26 20:55:25.723572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.246 [2024-11-26 20:55:25.724015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.246 [2024-11-26 20:55:25.724067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.246 [2024-11-26 20:55:25.724083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.246 [2024-11-26 20:55:25.724339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.246 [2024-11-26 20:55:25.724547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.246 [2024-11-26 20:55:25.724568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.246 [2024-11-26 20:55:25.724581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.246 [2024-11-26 20:55:25.724594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:22.246 [2024-11-26 20:55:25.736741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.246 [2024-11-26 20:55:25.737147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.246 [2024-11-26 20:55:25.737176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.246 [2024-11-26 20:55:25.737191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.246 [2024-11-26 20:55:25.737461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.246 [2024-11-26 20:55:25.737674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.246 [2024-11-26 20:55:25.737695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.246 [2024-11-26 20:55:25.737707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.246 [2024-11-26 20:55:25.737719] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:22.246 [2024-11-26 20:55:25.749912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.246 [2024-11-26 20:55:25.750338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.246 [2024-11-26 20:55:25.750367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.246 [2024-11-26 20:55:25.750382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.246 [2024-11-26 20:55:25.750619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.246 [2024-11-26 20:55:25.750823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.246 [2024-11-26 20:55:25.750843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.246 [2024-11-26 20:55:25.750857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.246 [2024-11-26 20:55:25.750869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:22.246 [2024-11-26 20:55:25.762917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.246 [2024-11-26 20:55:25.763322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.246 [2024-11-26 20:55:25.763351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.246 [2024-11-26 20:55:25.763372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.246 [2024-11-26 20:55:25.763608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.246 [2024-11-26 20:55:25.763796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.246 [2024-11-26 20:55:25.763816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.246 [2024-11-26 20:55:25.763829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.246 [2024-11-26 20:55:25.763842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:22.246 [2024-11-26 20:55:25.775943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.246 [2024-11-26 20:55:25.776394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.246 [2024-11-26 20:55:25.776423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.246 [2024-11-26 20:55:25.776438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.246 [2024-11-26 20:55:25.776691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.246 [2024-11-26 20:55:25.776879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.246 [2024-11-26 20:55:25.776899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.246 [2024-11-26 20:55:25.776911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.246 [2024-11-26 20:55:25.776924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:22.246 [2024-11-26 20:55:25.789090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.246 [2024-11-26 20:55:25.789443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.246 [2024-11-26 20:55:25.789473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.246 [2024-11-26 20:55:25.789488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.246 [2024-11-26 20:55:25.789723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.246 [2024-11-26 20:55:25.789928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.246 [2024-11-26 20:55:25.789949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.246 [2024-11-26 20:55:25.789961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.246 [2024-11-26 20:55:25.789973] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:22.246 [2024-11-26 20:55:25.802101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.246 [2024-11-26 20:55:25.802427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.246 [2024-11-26 20:55:25.802456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.246 [2024-11-26 20:55:25.802473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.246 [2024-11-26 20:55:25.802696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.246 [2024-11-26 20:55:25.802908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.246 [2024-11-26 20:55:25.802929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.246 [2024-11-26 20:55:25.802943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.246 [2024-11-26 20:55:25.802955] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:22.246 [2024-11-26 20:55:25.815204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.246 [2024-11-26 20:55:25.815520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.246 [2024-11-26 20:55:25.815548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.246 [2024-11-26 20:55:25.815564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.246 [2024-11-26 20:55:25.815784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.246 [2024-11-26 20:55:25.815987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.246 [2024-11-26 20:55:25.816008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.246 [2024-11-26 20:55:25.816021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.246 [2024-11-26 20:55:25.816034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:22.246 [2024-11-26 20:55:25.828170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.246 [2024-11-26 20:55:25.828491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.246 [2024-11-26 20:55:25.828520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.246 [2024-11-26 20:55:25.828536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.246 [2024-11-26 20:55:25.828752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.246 [2024-11-26 20:55:25.828976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.246 [2024-11-26 20:55:25.828996] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.246 [2024-11-26 20:55:25.829009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.247 [2024-11-26 20:55:25.829022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:22.247 [2024-11-26 20:55:25.841267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.247 [2024-11-26 20:55:25.841706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.247 [2024-11-26 20:55:25.841749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.247 [2024-11-26 20:55:25.841765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.247 [2024-11-26 20:55:25.841999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.247 [2024-11-26 20:55:25.842204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.247 [2024-11-26 20:55:25.842225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.247 [2024-11-26 20:55:25.842242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.247 [2024-11-26 20:55:25.842255] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:22.247 [2024-11-26 20:55:25.854292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.247 [2024-11-26 20:55:25.854713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.247 [2024-11-26 20:55:25.854741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.247 [2024-11-26 20:55:25.854757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.247 [2024-11-26 20:55:25.854993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.247 [2024-11-26 20:55:25.855198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.247 [2024-11-26 20:55:25.855217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.247 [2024-11-26 20:55:25.855230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.247 [2024-11-26 20:55:25.855242] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:22.247 [2024-11-26 20:55:25.867380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.247 [2024-11-26 20:55:25.867793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.247 [2024-11-26 20:55:25.867822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.247 [2024-11-26 20:55:25.867838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.247 [2024-11-26 20:55:25.868073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.247 [2024-11-26 20:55:25.868277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.247 [2024-11-26 20:55:25.868297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.247 [2024-11-26 20:55:25.868337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.247 [2024-11-26 20:55:25.868352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:22.247 [2024-11-26 20:55:25.880457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.247 [2024-11-26 20:55:25.880862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.247 [2024-11-26 20:55:25.880891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.247 [2024-11-26 20:55:25.880907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.247 [2024-11-26 20:55:25.881147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.247 [2024-11-26 20:55:25.881387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.247 [2024-11-26 20:55:25.881409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.247 [2024-11-26 20:55:25.881423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.247 [2024-11-26 20:55:25.881436] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:22.247 [2024-11-26 20:55:25.893663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.247 [2024-11-26 20:55:25.894041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.247 [2024-11-26 20:55:25.894069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.247 [2024-11-26 20:55:25.894085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.247 [2024-11-26 20:55:25.894311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.247 [2024-11-26 20:55:25.894511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.247 [2024-11-26 20:55:25.894534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.247 [2024-11-26 20:55:25.894547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.247 [2024-11-26 20:55:25.894562] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:22.247 [2024-11-26 20:55:25.906923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.247 [2024-11-26 20:55:25.907346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.247 [2024-11-26 20:55:25.907376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.247 [2024-11-26 20:55:25.907393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.247 [2024-11-26 20:55:25.907657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.247 [2024-11-26 20:55:25.907847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.247 [2024-11-26 20:55:25.907867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.247 [2024-11-26 20:55:25.907881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.247 [2024-11-26 20:55:25.907893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:22.247 [2024-11-26 20:55:25.919922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.247 [2024-11-26 20:55:25.920233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.247 [2024-11-26 20:55:25.920313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.247 [2024-11-26 20:55:25.920331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.247 [2024-11-26 20:55:25.920559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.247 [2024-11-26 20:55:25.920762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.247 [2024-11-26 20:55:25.920782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.247 [2024-11-26 20:55:25.920795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.247 [2024-11-26 20:55:25.920807] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:22.247 [2024-11-26 20:55:25.932920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.247 [2024-11-26 20:55:25.933370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.247 [2024-11-26 20:55:25.933398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.247 [2024-11-26 20:55:25.933420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.247 [2024-11-26 20:55:25.933648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.247 [2024-11-26 20:55:25.933851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.247 [2024-11-26 20:55:25.933871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.247 [2024-11-26 20:55:25.933884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.247 [2024-11-26 20:55:25.933896] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:22.507 [2024-11-26 20:55:25.946534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.507 [2024-11-26 20:55:25.946894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.507 [2024-11-26 20:55:25.946922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.507 [2024-11-26 20:55:25.946938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.507 [2024-11-26 20:55:25.947173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.507 [2024-11-26 20:55:25.947429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.507 [2024-11-26 20:55:25.947452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.507 [2024-11-26 20:55:25.947466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.507 [2024-11-26 20:55:25.947479] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:22.507 [2024-11-26 20:55:25.959593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.507 [2024-11-26 20:55:25.959913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.507 [2024-11-26 20:55:25.959942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.507 [2024-11-26 20:55:25.959958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.507 [2024-11-26 20:55:25.960174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.507 [2024-11-26 20:55:25.960423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.507 [2024-11-26 20:55:25.960446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.507 [2024-11-26 20:55:25.960460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.507 [2024-11-26 20:55:25.960473] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:22.507 [2024-11-26 20:55:25.972789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.507 [2024-11-26 20:55:25.973129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.507 [2024-11-26 20:55:25.973157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.507 [2024-11-26 20:55:25.973173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.507 [2024-11-26 20:55:25.973414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.507 [2024-11-26 20:55:25.973631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.507 [2024-11-26 20:55:25.973651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.507 [2024-11-26 20:55:25.973664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.507 [2024-11-26 20:55:25.973676] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:22.507 [2024-11-26 20:55:25.985926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.507 [2024-11-26 20:55:25.986242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.507 [2024-11-26 20:55:25.986271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.507 [2024-11-26 20:55:25.986288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.507 [2024-11-26 20:55:25.986542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.507 [2024-11-26 20:55:25.986773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.507 [2024-11-26 20:55:25.986794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.507 [2024-11-26 20:55:25.986808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.507 [2024-11-26 20:55:25.986821] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:22.507 [2024-11-26 20:55:25.999246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.507 [2024-11-26 20:55:25.999628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.508 [2024-11-26 20:55:25.999671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.508 [2024-11-26 20:55:25.999687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.508 [2024-11-26 20:55:25.999924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.508 [2024-11-26 20:55:26.000133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.508 [2024-11-26 20:55:26.000154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.508 [2024-11-26 20:55:26.000168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.508 [2024-11-26 20:55:26.000181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:22.508 [2024-11-26 20:55:26.012554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.508 [2024-11-26 20:55:26.013020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.508 [2024-11-26 20:55:26.013075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.508 [2024-11-26 20:55:26.013091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.508 [2024-11-26 20:55:26.013356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.508 [2024-11-26 20:55:26.013563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.508 [2024-11-26 20:55:26.013599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.508 [2024-11-26 20:55:26.013612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.508 [2024-11-26 20:55:26.013630] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:22.508 [2024-11-26 20:55:26.025914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.508 [2024-11-26 20:55:26.026238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.508 [2024-11-26 20:55:26.026275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.508 [2024-11-26 20:55:26.026317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.508 [2024-11-26 20:55:26.026562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.508 [2024-11-26 20:55:26.026803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.508 [2024-11-26 20:55:26.026823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.508 [2024-11-26 20:55:26.026835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.508 [2024-11-26 20:55:26.026847] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:22.508 [2024-11-26 20:55:26.039491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.508 [2024-11-26 20:55:26.039873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.508 [2024-11-26 20:55:26.039901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.508 [2024-11-26 20:55:26.039916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.508 [2024-11-26 20:55:26.040146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.508 [2024-11-26 20:55:26.040398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.508 [2024-11-26 20:55:26.040420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.508 [2024-11-26 20:55:26.040433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.508 [2024-11-26 20:55:26.040446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:22.508 [2024-11-26 20:55:26.052704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.508 [2024-11-26 20:55:26.053044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.508 [2024-11-26 20:55:26.053073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.508 [2024-11-26 20:55:26.053090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.508 [2024-11-26 20:55:26.053318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.508 [2024-11-26 20:55:26.053548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.508 [2024-11-26 20:55:26.053569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.508 [2024-11-26 20:55:26.053583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.508 [2024-11-26 20:55:26.053595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:22.508 [2024-11-26 20:55:26.065827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.508 [2024-11-26 20:55:26.066207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.508 [2024-11-26 20:55:26.066235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.508 [2024-11-26 20:55:26.066250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.508 [2024-11-26 20:55:26.066530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.508 [2024-11-26 20:55:26.066759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.508 [2024-11-26 20:55:26.066778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.508 [2024-11-26 20:55:26.066790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.508 [2024-11-26 20:55:26.066803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:22.508 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1767990 Killed "${NVMF_APP[@]}" "$@" 00:25:22.508 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:25:22.508 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:22.508 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:22.508 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:22.508 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:22.508 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1768947 00:25:22.508 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:22.508 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1768947 00:25:22.508 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1768947 ']' 00:25:22.508 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:22.508 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:22.508 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:22.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:22.508 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:22.508 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:22.508 [2024-11-26 20:55:26.079101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.508 [2024-11-26 20:55:26.079511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.508 [2024-11-26 20:55:26.079541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.508 [2024-11-26 20:55:26.079557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.508 [2024-11-26 20:55:26.079801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.508 [2024-11-26 20:55:26.080016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.509 [2024-11-26 20:55:26.080038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.509 [2024-11-26 20:55:26.080051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.509 [2024-11-26 20:55:26.080068] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:22.509 [2024-11-26 20:55:26.092608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.509 [2024-11-26 20:55:26.093041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.509 [2024-11-26 20:55:26.093071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.509 [2024-11-26 20:55:26.093088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.509 [2024-11-26 20:55:26.093341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.509 [2024-11-26 20:55:26.093548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.509 [2024-11-26 20:55:26.093569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.509 [2024-11-26 20:55:26.093583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.509 [2024-11-26 20:55:26.093596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:22.509 [2024-11-26 20:55:26.106011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.509 [2024-11-26 20:55:26.106404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.509 [2024-11-26 20:55:26.106434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.509 [2024-11-26 20:55:26.106450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.509 [2024-11-26 20:55:26.106692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.509 [2024-11-26 20:55:26.106887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.509 [2024-11-26 20:55:26.106907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.509 [2024-11-26 20:55:26.106920] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.509 [2024-11-26 20:55:26.106932] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:22.509 [2024-11-26 20:55:26.119312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.509 [2024-11-26 20:55:26.119609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.509 [2024-11-26 20:55:26.119652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.509 [2024-11-26 20:55:26.119668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.509 [2024-11-26 20:55:26.119891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.509 [2024-11-26 20:55:26.120102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.509 [2024-11-26 20:55:26.120122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.509 [2024-11-26 20:55:26.120135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.509 [2024-11-26 20:55:26.120147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:22.509 [2024-11-26 20:55:26.127866] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:25:22.509 [2024-11-26 20:55:26.127952] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:22.509 [2024-11-26 20:55:26.132765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.509 [2024-11-26 20:55:26.133153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.509 [2024-11-26 20:55:26.133182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.509 [2024-11-26 20:55:26.133200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.509 [2024-11-26 20:55:26.133425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.509 [2024-11-26 20:55:26.133674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.509 [2024-11-26 20:55:26.133695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.509 [2024-11-26 20:55:26.133709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.509 [2024-11-26 20:55:26.133722] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:22.509 [2024-11-26 20:55:26.146166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.509 [2024-11-26 20:55:26.146485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.509 [2024-11-26 20:55:26.146514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.509 [2024-11-26 20:55:26.146531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.509 [2024-11-26 20:55:26.146771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.509 [2024-11-26 20:55:26.146981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.509 [2024-11-26 20:55:26.147001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.509 [2024-11-26 20:55:26.147014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.509 [2024-11-26 20:55:26.147027] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:22.509 [2024-11-26 20:55:26.159780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.509 [2024-11-26 20:55:26.160154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.509 [2024-11-26 20:55:26.160181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.509 [2024-11-26 20:55:26.160208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.509 [2024-11-26 20:55:26.160449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.509 [2024-11-26 20:55:26.160693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.509 [2024-11-26 20:55:26.160713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.509 [2024-11-26 20:55:26.160726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.509 [2024-11-26 20:55:26.160739] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:22.509 [2024-11-26 20:55:26.173242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.509 [2024-11-26 20:55:26.173606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.509 [2024-11-26 20:55:26.173646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.509 [2024-11-26 20:55:26.173666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.509 [2024-11-26 20:55:26.173897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.509 [2024-11-26 20:55:26.174112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.509 [2024-11-26 20:55:26.174133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.509 [2024-11-26 20:55:26.174146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.509 [2024-11-26 20:55:26.174159] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:22.509 [2024-11-26 20:55:26.186894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.509 [2024-11-26 20:55:26.187327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.509 [2024-11-26 20:55:26.187356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.509 [2024-11-26 20:55:26.187372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.510 [2024-11-26 20:55:26.187591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.510 [2024-11-26 20:55:26.187814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.510 [2024-11-26 20:55:26.187834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.510 [2024-11-26 20:55:26.187848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.510 [2024-11-26 20:55:26.187872] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:22.510 [2024-11-26 20:55:26.200601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.510 [2024-11-26 20:55:26.200969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.510 [2024-11-26 20:55:26.200998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.510 [2024-11-26 20:55:26.201014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.769 [2024-11-26 20:55:26.201244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.769 [2024-11-26 20:55:26.201501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.769 [2024-11-26 20:55:26.201525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.769 [2024-11-26 20:55:26.201540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.769 [2024-11-26 20:55:26.201553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:22.769 [2024-11-26 20:55:26.205768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:22.769 [2024-11-26 20:55:26.214171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.769 [2024-11-26 20:55:26.214640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.769 [2024-11-26 20:55:26.214686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.769 [2024-11-26 20:55:26.214704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.769 [2024-11-26 20:55:26.214981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.769 [2024-11-26 20:55:26.215190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.769 [2024-11-26 20:55:26.215212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.769 [2024-11-26 20:55:26.215227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.769 [2024-11-26 20:55:26.215241] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:22.769 [2024-11-26 20:55:26.227821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.769 [2024-11-26 20:55:26.228317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.769 [2024-11-26 20:55:26.228354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.769 [2024-11-26 20:55:26.228374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.769 [2024-11-26 20:55:26.228619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.769 [2024-11-26 20:55:26.228839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.769 [2024-11-26 20:55:26.228860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.769 [2024-11-26 20:55:26.228875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.769 [2024-11-26 20:55:26.228889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:22.769 [2024-11-26 20:55:26.241214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.769 [2024-11-26 20:55:26.241590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.769 [2024-11-26 20:55:26.241621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.769 [2024-11-26 20:55:26.241638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.769 [2024-11-26 20:55:26.241868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.769 [2024-11-26 20:55:26.242075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.769 [2024-11-26 20:55:26.242103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.769 [2024-11-26 20:55:26.242116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.769 [2024-11-26 20:55:26.242129] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:22.769 [2024-11-26 20:55:26.254726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.769 [2024-11-26 20:55:26.255104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.769 [2024-11-26 20:55:26.255140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.769 [2024-11-26 20:55:26.255156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.769 [2024-11-26 20:55:26.255397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.769 [2024-11-26 20:55:26.255628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.769 [2024-11-26 20:55:26.255683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.769 [2024-11-26 20:55:26.255698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.769 [2024-11-26 20:55:26.255711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:22.769 [2024-11-26 20:55:26.266666] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:22.769 [2024-11-26 20:55:26.266701] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:22.769 [2024-11-26 20:55:26.266722] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:22.769 [2024-11-26 20:55:26.266734] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:22.769 [2024-11-26 20:55:26.266743] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:22.769 [2024-11-26 20:55:26.268157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.769 [2024-11-26 20:55:26.268180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:22.769 [2024-11-26 20:55:26.268223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:22.769 [2024-11-26 20:55:26.268226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.769 [2024-11-26 20:55:26.268548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.769 [2024-11-26 20:55:26.268577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.769 [2024-11-26 20:55:26.268593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.769 [2024-11-26 20:55:26.268831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.769 [2024-11-26 20:55:26.269054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.769 [2024-11-26 20:55:26.269075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.769 [2024-11-26 20:55:26.269089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.769 [2024-11-26 20:55:26.269103] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:22.769 [2024-11-26 20:55:26.281622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.769 [2024-11-26 20:55:26.282156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.769 [2024-11-26 20:55:26.282205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.770 [2024-11-26 20:55:26.282225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.770 [2024-11-26 20:55:26.282458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.770 [2024-11-26 20:55:26.282707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.770 [2024-11-26 20:55:26.282729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.770 [2024-11-26 20:55:26.282745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.770 [2024-11-26 20:55:26.282760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:22.770 [2024-11-26 20:55:26.295251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.770 [2024-11-26 20:55:26.295792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.770 [2024-11-26 20:55:26.295850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.770 [2024-11-26 20:55:26.295873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.770 [2024-11-26 20:55:26.296124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.770 [2024-11-26 20:55:26.296362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.770 [2024-11-26 20:55:26.296386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.770 [2024-11-26 20:55:26.296402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.770 [2024-11-26 20:55:26.296417] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:22.770 [2024-11-26 20:55:26.308902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.770 [2024-11-26 20:55:26.309398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.770 [2024-11-26 20:55:26.309439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.770 [2024-11-26 20:55:26.309459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.770 [2024-11-26 20:55:26.309698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.770 [2024-11-26 20:55:26.309908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.770 [2024-11-26 20:55:26.309929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.770 [2024-11-26 20:55:26.309944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.770 [2024-11-26 20:55:26.309960] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:22.770 [2024-11-26 20:55:26.322427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.770 [2024-11-26 20:55:26.322924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.770 [2024-11-26 20:55:26.322969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.770 [2024-11-26 20:55:26.322988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.770 [2024-11-26 20:55:26.323236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.770 [2024-11-26 20:55:26.323474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.770 [2024-11-26 20:55:26.323497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.770 [2024-11-26 20:55:26.323514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.770 [2024-11-26 20:55:26.323528] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:22.770 [2024-11-26 20:55:26.335961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.770 [2024-11-26 20:55:26.336469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.770 [2024-11-26 20:55:26.336519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.770 [2024-11-26 20:55:26.336539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.770 [2024-11-26 20:55:26.336784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.770 [2024-11-26 20:55:26.336994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.770 [2024-11-26 20:55:26.337015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.770 [2024-11-26 20:55:26.337031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.770 [2024-11-26 20:55:26.337046] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:22.770 [2024-11-26 20:55:26.349427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.770 [2024-11-26 20:55:26.349918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.770 [2024-11-26 20:55:26.349965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.770 [2024-11-26 20:55:26.349984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.770 [2024-11-26 20:55:26.350233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.770 [2024-11-26 20:55:26.350496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.770 [2024-11-26 20:55:26.350519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.770 [2024-11-26 20:55:26.350535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.770 [2024-11-26 20:55:26.350551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:22.770 [2024-11-26 20:55:26.362940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.770 [2024-11-26 20:55:26.363327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.770 [2024-11-26 20:55:26.363368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.770 [2024-11-26 20:55:26.363384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.770 [2024-11-26 20:55:26.363614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.770 [2024-11-26 20:55:26.363821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.770 [2024-11-26 20:55:26.363842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.770 [2024-11-26 20:55:26.363857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.770 [2024-11-26 20:55:26.363869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:22.770 [2024-11-26 20:55:26.376497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.770 [2024-11-26 20:55:26.376868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.770 [2024-11-26 20:55:26.376897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.770 [2024-11-26 20:55:26.376914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.770 [2024-11-26 20:55:26.377145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.770 [2024-11-26 20:55:26.377412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.771 [2024-11-26 20:55:26.377436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.771 [2024-11-26 20:55:26.377459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.771 [2024-11-26 20:55:26.377473] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:22.771 [2024-11-26 20:55:26.389917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.771 [2024-11-26 20:55:26.390253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.771 [2024-11-26 20:55:26.390282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.771 [2024-11-26 20:55:26.390298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.771 [2024-11-26 20:55:26.390522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.771 [2024-11-26 20:55:26.390753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.771 [2024-11-26 20:55:26.390775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.771 [2024-11-26 20:55:26.390789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.771 [2024-11-26 20:55:26.390802] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:22.771 [2024-11-26 20:55:26.403470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.771 [2024-11-26 20:55:26.403809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.771 [2024-11-26 20:55:26.403852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.771 [2024-11-26 20:55:26.403869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.771 [2024-11-26 20:55:26.404098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.771 [2024-11-26 20:55:26.404350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.771 [2024-11-26 20:55:26.404373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.771 [2024-11-26 20:55:26.404387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.771 [2024-11-26 20:55:26.404401] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:22.771 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:22.771 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:25:22.771 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:22.771 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:22.771 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:22.771 [2024-11-26 20:55:26.417024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.771 [2024-11-26 20:55:26.417384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.771 [2024-11-26 20:55:26.417413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.771 [2024-11-26 20:55:26.417429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.771 [2024-11-26 20:55:26.417659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.771 [2024-11-26 20:55:26.417892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.771 [2024-11-26 20:55:26.417919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.771 [2024-11-26 20:55:26.417933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.771 [2024-11-26 20:55:26.417946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:22.771 [2024-11-26 20:55:26.430551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.771 [2024-11-26 20:55:26.430919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.771 [2024-11-26 20:55:26.430949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.771 [2024-11-26 20:55:26.430966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.771 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT [2024-11-26 20:55:26.431195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.771 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:22.771 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable [2024-11-26 20:55:26.431455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state [2024-11-26 20:55:26.431480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x [2024-11-26 20:55:26.431495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. [2024-11-26 20:55:26.431510] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:22.771 [2024-11-26 20:55:26.433242] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:22.771 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.771 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:22.771 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.771 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:22.771 [2024-11-26 20:55:26.444055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.771 [2024-11-26 20:55:26.444447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.771 [2024-11-26 20:55:26.444476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.771 [2024-11-26 20:55:26.444493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.771 [2024-11-26 20:55:26.444709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.771 [2024-11-26 20:55:26.444949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.771 [2024-11-26 20:55:26.444970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.771 [2024-11-26 20:55:26.444984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.771 [2024-11-26 20:55:26.444997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:22.771 [2024-11-26 20:55:26.457505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:22.771 [2024-11-26 20:55:26.457981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.771 [2024-11-26 20:55:26.458011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:22.771 [2024-11-26 20:55:26.458038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:22.771 [2024-11-26 20:55:26.458286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:22.771 [2024-11-26 20:55:26.458527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:22.771 [2024-11-26 20:55:26.458550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:22.771 [2024-11-26 20:55:26.458565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:22.772 [2024-11-26 20:55:26.458578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:23.063 [2024-11-26 20:55:26.471146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:23.063 [2024-11-26 20:55:26.471515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:23.063 [2024-11-26 20:55:26.471545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:23.063 [2024-11-26 20:55:26.471561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:23.063 [2024-11-26 20:55:26.471792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:23.063 [2024-11-26 20:55:26.472016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:23.063 [2024-11-26 20:55:26.472037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:23.063 [2024-11-26 20:55:26.472052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:23.063 [2024-11-26 20:55:26.472065] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:23.063 Malloc0 00:25:23.063 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.063 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:23.063 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.063 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:23.063 [2024-11-26 20:55:26.484674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:23.063 [2024-11-26 20:55:26.485088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:23.063 [2024-11-26 20:55:26.485119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:23.063 [2024-11-26 20:55:26.485136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:23.063 [2024-11-26 20:55:26.485384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:23.063 [2024-11-26 20:55:26.485598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:23.063 [2024-11-26 20:55:26.485634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:23.063 [2024-11-26 20:55:26.485647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:23.063 [2024-11-26 20:55:26.485661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:23.063 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.063 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:23.063 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.063 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:23.063 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.063 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:23.063 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.063 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:23.063 [2024-11-26 20:55:26.498168] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:23.063 [2024-11-26 20:55:26.498537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:23.063 [2024-11-26 20:55:26.498567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99ca50 with addr=10.0.0.2, port=4420 00:25:23.063 [2024-11-26 20:55:26.498584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x99ca50 is same with the state(6) to be set 00:25:23.063 [2024-11-26 20:55:26.498814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99ca50 (9): Bad file descriptor 00:25:23.063 [2024-11-26 20:55:26.499036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:23.063 [2024-11-26 20:55:26.499057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:23.063 [2024-11-26 20:55:26.499071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:23.063 [2024-11-26 20:55:26.499084] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:23.063 [2024-11-26 20:55:26.499957] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:23.063 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.063 20:55:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1768275 00:25:23.063 [2024-11-26 20:55:26.511650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:23.063 [2024-11-26 20:55:26.540948] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:25:23.996 3619.17 IOPS, 14.14 MiB/s [2024-11-26T19:55:29.067Z] 4341.29 IOPS, 16.96 MiB/s [2024-11-26T19:55:30.000Z] 4892.50 IOPS, 19.11 MiB/s [2024-11-26T19:55:30.933Z] 5307.56 IOPS, 20.73 MiB/s [2024-11-26T19:55:31.865Z] 5639.10 IOPS, 22.03 MiB/s [2024-11-26T19:55:32.799Z] 5899.91 IOPS, 23.05 MiB/s [2024-11-26T19:55:33.734Z] 6128.83 IOPS, 23.94 MiB/s [2024-11-26T19:55:34.665Z] 6315.00 IOPS, 24.67 MiB/s [2024-11-26T19:55:36.038Z] 6477.07 IOPS, 25.30 MiB/s [2024-11-26T19:55:36.038Z] 6621.47 IOPS, 25.87 MiB/s 00:25:32.341 Latency(us) 00:25:32.341 [2024-11-26T19:55:36.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:32.341 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:32.341 Verification LBA range: start 0x0 length 0x4000 00:25:32.341 Nvme1n1 : 15.01 6625.64 25.88 10127.97 0.00 7617.49 819.20 16408.27 00:25:32.341 [2024-11-26T19:55:36.038Z] =================================================================================================================== 00:25:32.341 [2024-11-26T19:55:36.038Z] Total : 6625.64 25.88 10127.97 0.00 7617.49 819.20 16408.27 00:25:32.341 20:55:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:25:32.341 20:55:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:32.341 20:55:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.341 20:55:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:32.341 20:55:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.341 20:55:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:25:32.341 20:55:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:25:32.341 20:55:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:32.341 20:55:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:25:32.341 20:55:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:32.341 20:55:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:25:32.341 20:55:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:32.341 20:55:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:32.341 rmmod nvme_tcp 00:25:32.341 rmmod nvme_fabrics 00:25:32.341 rmmod nvme_keyring 00:25:32.341 20:55:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:32.341 20:55:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:25:32.341 20:55:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:25:32.341 20:55:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1768947 ']' 00:25:32.341 20:55:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1768947 00:25:32.341 20:55:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1768947 ']' 00:25:32.341 20:55:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1768947 00:25:32.341 20:55:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:25:32.341 20:55:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:32.341 20:55:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1768947 00:25:32.341 20:55:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:32.341 20:55:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:32.341 20:55:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1768947' 00:25:32.342 killing process with pid 1768947 00:25:32.342 20:55:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1768947 00:25:32.342 20:55:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1768947 00:25:32.600 20:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:32.600 20:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:32.600 20:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:32.600 20:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:25:32.600 20:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:25:32.600 20:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:32.600 20:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:25:32.600 20:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:32.600 20:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:32.600 20:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.600 20:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:32.600 20:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:35.135 00:25:35.135 real 0m22.740s 00:25:35.135 user 1m0.785s 00:25:35.135 sys 0m4.259s 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:35.135 ************************************ 00:25:35.135 END TEST nvmf_bdevperf 00:25:35.135 ************************************ 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.135 ************************************ 00:25:35.135 START TEST nvmf_target_disconnect 00:25:35.135 ************************************ 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:35.135 * Looking for test storage... 
00:25:35.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:35.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.135 --rc genhtml_branch_coverage=1 00:25:35.135 --rc genhtml_function_coverage=1 00:25:35.135 --rc genhtml_legend=1 00:25:35.135 --rc geninfo_all_blocks=1 00:25:35.135 --rc geninfo_unexecuted_blocks=1 00:25:35.135 00:25:35.135 ' 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:35.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.135 --rc genhtml_branch_coverage=1 00:25:35.135 --rc genhtml_function_coverage=1 00:25:35.135 --rc genhtml_legend=1 00:25:35.135 --rc geninfo_all_blocks=1 00:25:35.135 --rc geninfo_unexecuted_blocks=1 00:25:35.135 00:25:35.135 ' 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:35.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.135 --rc genhtml_branch_coverage=1 00:25:35.135 --rc genhtml_function_coverage=1 00:25:35.135 --rc genhtml_legend=1 00:25:35.135 --rc geninfo_all_blocks=1 00:25:35.135 --rc geninfo_unexecuted_blocks=1 00:25:35.135 00:25:35.135 ' 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:35.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.135 --rc genhtml_branch_coverage=1 00:25:35.135 --rc genhtml_function_coverage=1 00:25:35.135 --rc genhtml_legend=1 00:25:35.135 --rc geninfo_all_blocks=1 00:25:35.135 --rc geninfo_unexecuted_blocks=1 00:25:35.135 00:25:35.135 ' 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.135 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.136 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.136 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:25:35.136 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.136 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:25:35.136 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:35.136 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:35.136 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:35.136 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:35.136 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:35.136 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:35.136 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:35.136 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:35.136 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:35.136 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:35.136 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:35.136 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:25:35.136 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:25:35.136 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:25:35.136 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:35.136 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:35.136 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:35.136 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:35.136 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:35.136 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.136 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:35.136 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:35.136 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:35.136 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:35.136 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:25:35.136 20:55:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:37.037 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:37.037 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:37.037 Found net devices under 0000:09:00.0: cvl_0_0 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:37.037 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.038 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:37.038 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.038 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:37.038 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:37.038 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.038 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:37.038 Found net devices under 0000:09:00.1: cvl_0_1 00:25:37.038 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.038 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:37.038 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:25:37.038 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:37.038 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:37.038 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:37.038 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
00:25:37.038 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:37.038 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:37.038 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:37.038 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:37.038 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:37.038 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:37.038 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:37.038 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:37.038 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:37.038 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:37.038 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:37.038 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:37.038 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:37.038 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:37.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:37.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:25:37.301 00:25:37.301 --- 10.0.0.2 ping statistics --- 00:25:37.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.301 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:37.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:37.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:25:37.301 00:25:37.301 --- 10.0.0.1 ping statistics --- 00:25:37.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.301 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:37.301 ************************************ 00:25:37.301 START TEST nvmf_target_disconnect_tc1 00:25:37.301 ************************************ 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:37.301 20:55:40 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:37.301 [2024-11-26 20:55:40.964360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:37.301 [2024-11-26 20:55:40.964421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd94f40 with addr=10.0.0.2, port=4420 00:25:37.301 [2024-11-26 20:55:40.964454] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:37.301 [2024-11-26 20:55:40.964478] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:37.301 [2024-11-26 20:55:40.964492] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:25:37.301 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:25:37.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:25:37.301 Initializing NVMe Controllers 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:37.301 00:25:37.301 real 0m0.099s 00:25:37.301 user 0m0.051s 00:25:37.301 sys 0m0.047s 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:37.301 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:37.301 ************************************ 00:25:37.301 END TEST nvmf_target_disconnect_tc1 00:25:37.301 ************************************ 00:25:37.560 20:55:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:25:37.560 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:37.560 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:25:37.560 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:37.560 ************************************ 00:25:37.560 START TEST nvmf_target_disconnect_tc2 00:25:37.560 ************************************ 00:25:37.560 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:25:37.560 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:25:37.560 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:37.560 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:37.560 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:37.560 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:37.560 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1772141 00:25:37.560 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:37.560 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1772141 00:25:37.560 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1772141 ']' 00:25:37.560 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:37.560 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:37.560 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:37.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:37.560 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:37.560 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:37.560 [2024-11-26 20:55:41.086319] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:25:37.560 [2024-11-26 20:55:41.086428] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:37.560 [2024-11-26 20:55:41.160225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:37.560 [2024-11-26 20:55:41.217961] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:37.560 [2024-11-26 20:55:41.218014] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:37.560 [2024-11-26 20:55:41.218038] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:37.560 [2024-11-26 20:55:41.218048] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:37.560 [2024-11-26 20:55:41.218057] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:37.560 [2024-11-26 20:55:41.219580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:25:37.560 [2024-11-26 20:55:41.219693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:25:37.560 [2024-11-26 20:55:41.219821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:37.560 [2024-11-26 20:55:41.219812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:25:37.819 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:37.819 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:25:37.819 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:37.819 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:37.819 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:37.819 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:37.819 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:37.819 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.819 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:37.819 Malloc0 00:25:37.819 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.819 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:37.819 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.819 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:37.819 [2024-11-26 20:55:41.403265] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:37.819 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.819 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:37.819 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.819 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:37.819 20:55:41 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.819 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:37.819 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.819 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:37.819 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.819 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:37.819 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.819 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:37.819 [2024-11-26 20:55:41.431552] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:37.819 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.819 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:37.819 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.819 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:37.819 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.819 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1772254 00:25:37.819 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:37.819 20:55:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:25:40.383 20:55:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1772141 00:25:40.383 20:55:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:25:40.383 Read completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Read completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Read completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Read completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Read completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Read completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Read completed with error 
(sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Read completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Read completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Read completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Read completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Write completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Read completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Write completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Write completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Read completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Write completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Read completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Write completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Read completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Write completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Write completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Read completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Read completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Read completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Write completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Write completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Write completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Read completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Read completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Read completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Read completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Read completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 [2024-11-26 20:55:43.456623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:40.383 Read completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Write completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Write completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Write completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Write completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.383 Write completed with error (sct=0, sc=8) 00:25:40.383 starting I/O failed 00:25:40.384 Read completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Read completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Write completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Read completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Read completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Read completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Write 
completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Read completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Write completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Write completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Read completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Read completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Read completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Write completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Read completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Read completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Write completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Read completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Read completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Write completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Write completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Read completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Read completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Write completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Write completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 [2024-11-26 20:55:43.456939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:40.384 Read completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Read completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Read completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Read completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Read completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Read completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Read completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Read completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Write completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Read completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Read completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Read completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Write completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Write completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Write completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Write completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Write completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Write completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Write completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Read completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 
00:25:40.384 Read completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Read completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Write completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Write completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Write completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Read completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Read completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Write completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Write completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Write completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Read completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 Write completed with error (sct=0, sc=8) 00:25:40.384 starting I/O failed 00:25:40.384 [2024-11-26 20:55:43.457259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:40.385 Read completed with error (sct=0, sc=8) 00:25:40.385 starting I/O failed 00:25:40.385 Read completed with error (sct=0, sc=8) 00:25:40.385 starting I/O failed 00:25:40.385 Read completed with error (sct=0, sc=8) 00:25:40.385 starting I/O failed 00:25:40.385 Read completed with error (sct=0, sc=8) 00:25:40.385 starting I/O failed 00:25:40.385 Read completed with error (sct=0, sc=8) 00:25:40.385 starting I/O failed 00:25:40.385 Read completed with error (sct=0, sc=8) 00:25:40.385 starting I/O failed 00:25:40.385 Read completed with error (sct=0, sc=8) 00:25:40.385 starting I/O failed 00:25:40.385 Write completed with error (sct=0, sc=8) 00:25:40.385 starting I/O failed 00:25:40.385 Read completed with error (sct=0, sc=8) 00:25:40.385 starting I/O failed 00:25:40.385 Write completed with error (sct=0, sc=8) 00:25:40.385 starting I/O failed 00:25:40.385 Read completed with error (sct=0, sc=8) 00:25:40.385 starting I/O failed 00:25:40.385 Read completed with error (sct=0, sc=8) 00:25:40.385 starting I/O failed 00:25:40.385 Write completed with error (sct=0, sc=8) 00:25:40.385 starting I/O failed 00:25:40.385 Read completed with error (sct=0, sc=8) 00:25:40.385 starting I/O failed 00:25:40.385 Write completed with error (sct=0, sc=8) 00:25:40.385 starting I/O failed 00:25:40.385 Read completed with error (sct=0, sc=8) 00:25:40.385 starting I/O failed 00:25:40.385 Read completed with error (sct=0, sc=8) 00:25:40.385 starting I/O failed 00:25:40.385 Read completed with error (sct=0, sc=8) 00:25:40.385 starting I/O failed 00:25:40.385 Write completed with error (sct=0, sc=8) 00:25:40.385 starting I/O failed 00:25:40.385 Write completed with error (sct=0, sc=8) 00:25:40.385 starting I/O failed 00:25:40.385 Write completed with error (sct=0, sc=8) 00:25:40.385 starting I/O failed 00:25:40.385 Read completed with error (sct=0, sc=8) 00:25:40.385 starting I/O failed 00:25:40.385 Read completed with error (sct=0, sc=8) 00:25:40.385 starting I/O failed 00:25:40.385 Read completed with error (sct=0, sc=8) 00:25:40.385 starting I/O failed 00:25:40.385 Read completed with error (sct=0, sc=8) 00:25:40.385 starting I/O failed 00:25:40.385 Write completed with error (sct=0, sc=8) 00:25:40.385 starting I/O failed 00:25:40.385 Read completed with error (sct=0, sc=8) 00:25:40.385 
starting I/O failed 00:25:40.385 Write completed with error (sct=0, sc=8) 00:25:40.385 starting I/O failed 00:25:40.385 Write completed with error (sct=0, sc=8) 00:25:40.385 starting I/O failed 00:25:40.385 Write completed with error (sct=0, sc=8) 00:25:40.385 starting I/O failed 00:25:40.385 Write completed with error (sct=0, sc=8) 00:25:40.385 starting I/O failed 00:25:40.386 Read completed with error (sct=0, sc=8) 00:25:40.386 starting I/O failed 00:25:40.386 [2024-11-26 20:55:43.457599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:40.386 [2024-11-26 20:55:43.457825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.386 [2024-11-26 20:55:43.457859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.386 qpair failed and we were unable to recover it. 00:25:40.386 [2024-11-26 20:55:43.458049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.386 [2024-11-26 20:55:43.458100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.386 qpair failed and we were unable to recover it. 00:25:40.386 [2024-11-26 20:55:43.458205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.386 [2024-11-26 20:55:43.458231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.386 qpair failed and we were unable to recover it. 00:25:40.386 [2024-11-26 20:55:43.458336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.386 [2024-11-26 20:55:43.458365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.386 qpair failed and we were unable to recover it. 00:25:40.386 [2024-11-26 20:55:43.458463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.386 [2024-11-26 20:55:43.458489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.386 qpair failed and we were unable to recover it. 00:25:40.386 [2024-11-26 20:55:43.458572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.386 [2024-11-26 20:55:43.458599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.386 qpair failed and we were unable to recover it. 00:25:40.386 [2024-11-26 20:55:43.458687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.386 [2024-11-26 20:55:43.458713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.387 qpair failed and we were unable to recover it. 00:25:40.387 [2024-11-26 20:55:43.458803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.387 [2024-11-26 20:55:43.458828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.387 qpair failed and we were unable to recover it. 
00:25:40.387 [2024-11-26 20:55:43.458968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.387 [2024-11-26 20:55:43.458993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.387 qpair failed and we were unable to recover it. 00:25:40.387 [2024-11-26 20:55:43.459106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.387 [2024-11-26 20:55:43.459132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.387 qpair failed and we were unable to recover it. 00:25:40.387 [2024-11-26 20:55:43.459236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.387 [2024-11-26 20:55:43.459286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.387 qpair failed and we were unable to recover it. 00:25:40.387 [2024-11-26 20:55:43.459396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.387 [2024-11-26 20:55:43.459424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.387 qpair failed and we were unable to recover it. 00:25:40.387 [2024-11-26 20:55:43.459526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.387 [2024-11-26 20:55:43.459553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.387 qpair failed and we were unable to recover it. 00:25:40.387 [2024-11-26 20:55:43.459636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.387 [2024-11-26 20:55:43.459662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.387 qpair failed and we were unable to recover it. 00:25:40.387 [2024-11-26 20:55:43.459753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.387 [2024-11-26 20:55:43.459779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.387 qpair failed and we were unable to recover it. 00:25:40.387 [2024-11-26 20:55:43.459863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.387 [2024-11-26 20:55:43.459897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.387 qpair failed and we were unable to recover it. 00:25:40.388 [2024-11-26 20:55:43.460009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.388 [2024-11-26 20:55:43.460036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.388 qpair failed and we were unable to recover it. 00:25:40.388 [2024-11-26 20:55:43.460166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.388 [2024-11-26 20:55:43.460192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.388 qpair failed and we were unable to recover it. 
00:25:40.388 [2024-11-26 20:55:43.460298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.388 [2024-11-26 20:55:43.460357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.388 qpair failed and we were unable to recover it. 00:25:40.388 [2024-11-26 20:55:43.460447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.388 [2024-11-26 20:55:43.460476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.388 qpair failed and we were unable to recover it. 00:25:40.388 [2024-11-26 20:55:43.460622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.388 [2024-11-26 20:55:43.460649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.388 qpair failed and we were unable to recover it. 00:25:40.388 [2024-11-26 20:55:43.460741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.388 [2024-11-26 20:55:43.460769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.388 qpair failed and we were unable to recover it. 00:25:40.388 [2024-11-26 20:55:43.460975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.389 [2024-11-26 20:55:43.461002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.389 qpair failed and we were unable to recover it. 00:25:40.389 [2024-11-26 20:55:43.461088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.389 [2024-11-26 20:55:43.461116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.389 qpair failed and we were unable to recover it. 00:25:40.389 [2024-11-26 20:55:43.461231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.389 [2024-11-26 20:55:43.461259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.389 qpair failed and we were unable to recover it. 00:25:40.389 [2024-11-26 20:55:43.461373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.389 [2024-11-26 20:55:43.461413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.389 qpair failed and we were unable to recover it. 00:25:40.389 [2024-11-26 20:55:43.461496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.389 [2024-11-26 20:55:43.461524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.389 qpair failed and we were unable to recover it. 00:25:40.389 [2024-11-26 20:55:43.461621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.389 [2024-11-26 20:55:43.461647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.389 qpair failed and we were unable to recover it. 
00:25:40.389 [2024-11-26 20:55:43.461869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.389 [2024-11-26 20:55:43.461932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.389 qpair failed and we were unable to recover it. 00:25:40.389 [2024-11-26 20:55:43.462128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.389 [2024-11-26 20:55:43.462155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.389 qpair failed and we were unable to recover it. 00:25:40.389 [2024-11-26 20:55:43.462263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.389 [2024-11-26 20:55:43.462289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.389 qpair failed and we were unable to recover it. 00:25:40.389 [2024-11-26 20:55:43.462390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.389 [2024-11-26 20:55:43.462416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.389 qpair failed and we were unable to recover it. 00:25:40.390 [2024-11-26 20:55:43.462511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.390 [2024-11-26 20:55:43.462538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.390 qpair failed and we were unable to recover it. 00:25:40.390 [2024-11-26 20:55:43.462664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.390 [2024-11-26 20:55:43.462690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.390 qpair failed and we were unable to recover it. 00:25:40.390 [2024-11-26 20:55:43.462772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.390 [2024-11-26 20:55:43.462801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.390 qpair failed and we were unable to recover it. 00:25:40.390 [2024-11-26 20:55:43.462913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.390 [2024-11-26 20:55:43.462940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.390 qpair failed and we were unable to recover it. 00:25:40.390 [2024-11-26 20:55:43.463062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.390 [2024-11-26 20:55:43.463092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.390 qpair failed and we were unable to recover it. 00:25:40.390 [2024-11-26 20:55:43.463176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.390 [2024-11-26 20:55:43.463202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.390 qpair failed and we were unable to recover it. 
00:25:40.390 [2024-11-26 20:55:43.463298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.390 [2024-11-26 20:55:43.463356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.390 qpair failed and we were unable to recover it. 00:25:40.390 [2024-11-26 20:55:43.463459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.390 [2024-11-26 20:55:43.463487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.390 qpair failed and we were unable to recover it. 00:25:40.390 [2024-11-26 20:55:43.463596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.390 [2024-11-26 20:55:43.463623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.390 qpair failed and we were unable to recover it. 00:25:40.390 [2024-11-26 20:55:43.463833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.390 [2024-11-26 20:55:43.463888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.390 qpair failed and we were unable to recover it. 00:25:40.390 [2024-11-26 20:55:43.464015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.390 [2024-11-26 20:55:43.464082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.390 qpair failed and we were unable to recover it. 00:25:40.390 [2024-11-26 20:55:43.464175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.390 [2024-11-26 20:55:43.464202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.390 qpair failed and we were unable to recover it. 00:25:40.390 [2024-11-26 20:55:43.464294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.390 [2024-11-26 20:55:43.464329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.390 qpair failed and we were unable to recover it. 00:25:40.390 [2024-11-26 20:55:43.464425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.390 [2024-11-26 20:55:43.464451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.390 qpair failed and we were unable to recover it. 00:25:40.390 [2024-11-26 20:55:43.464565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.390 [2024-11-26 20:55:43.464591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.390 qpair failed and we were unable to recover it. 00:25:40.390 [2024-11-26 20:55:43.464673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.390 [2024-11-26 20:55:43.464699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.390 qpair failed and we were unable to recover it. 
00:25:40.390 [2024-11-26 20:55:43.464802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.390 [2024-11-26 20:55:43.464828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.390 qpair failed and we were unable to recover it. 00:25:40.390 [2024-11-26 20:55:43.464957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.390 [2024-11-26 20:55:43.464985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.390 qpair failed and we were unable to recover it. 00:25:40.390 [2024-11-26 20:55:43.465126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.390 [2024-11-26 20:55:43.465152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.390 qpair failed and we were unable to recover it. 00:25:40.390 [2024-11-26 20:55:43.465236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.390 [2024-11-26 20:55:43.465262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.390 qpair failed and we were unable to recover it. 00:25:40.390 [2024-11-26 20:55:43.465364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.390 [2024-11-26 20:55:43.465391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.390 qpair failed and we were unable to recover it. 00:25:40.390 [2024-11-26 20:55:43.465473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.390 [2024-11-26 20:55:43.465500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.390 qpair failed and we were unable to recover it. 00:25:40.391 [2024-11-26 20:55:43.465578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.391 [2024-11-26 20:55:43.465604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.391 qpair failed and we were unable to recover it. 00:25:40.391 [2024-11-26 20:55:43.465692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.391 [2024-11-26 20:55:43.465719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.391 qpair failed and we were unable to recover it. 00:25:40.391 [2024-11-26 20:55:43.465863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.391 [2024-11-26 20:55:43.465889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.391 qpair failed and we were unable to recover it. 00:25:40.391 [2024-11-26 20:55:43.466014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.391 [2024-11-26 20:55:43.466055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.391 qpair failed and we were unable to recover it. 
00:25:40.391 [2024-11-26 20:55:43.466147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.391 [2024-11-26 20:55:43.466174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.391 qpair failed and we were unable to recover it. 00:25:40.391 [2024-11-26 20:55:43.466325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.391 [2024-11-26 20:55:43.466365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.391 qpair failed and we were unable to recover it. 00:25:40.391 [2024-11-26 20:55:43.466467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.391 [2024-11-26 20:55:43.466496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.391 qpair failed and we were unable to recover it. 00:25:40.391 [2024-11-26 20:55:43.466604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.391 [2024-11-26 20:55:43.466631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.391 qpair failed and we were unable to recover it. 00:25:40.391 [2024-11-26 20:55:43.466708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.391 [2024-11-26 20:55:43.466734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.391 qpair failed and we were unable to recover it. 00:25:40.391 [2024-11-26 20:55:43.466874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.391 [2024-11-26 20:55:43.466900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.391 qpair failed and we were unable to recover it. 00:25:40.391 [2024-11-26 20:55:43.466993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.391 [2024-11-26 20:55:43.467020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.391 qpair failed and we were unable to recover it. 00:25:40.391 [2024-11-26 20:55:43.467118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.391 [2024-11-26 20:55:43.467145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.391 qpair failed and we were unable to recover it. 00:25:40.391 [2024-11-26 20:55:43.467268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.391 [2024-11-26 20:55:43.467296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.392 qpair failed and we were unable to recover it. 00:25:40.392 [2024-11-26 20:55:43.467399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.392 [2024-11-26 20:55:43.467426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.392 qpair failed and we were unable to recover it. 
00:25:40.392 [2024-11-26 20:55:43.467537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.392 [2024-11-26 20:55:43.467563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.392 qpair failed and we were unable to recover it. 00:25:40.392 [2024-11-26 20:55:43.467654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.392 [2024-11-26 20:55:43.467680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.392 qpair failed and we were unable to recover it. 00:25:40.392 [2024-11-26 20:55:43.467767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.392 [2024-11-26 20:55:43.467793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.392 qpair failed and we were unable to recover it. 00:25:40.392 [2024-11-26 20:55:43.467916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.392 [2024-11-26 20:55:43.467944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.392 qpair failed and we were unable to recover it. 00:25:40.392 [2024-11-26 20:55:43.468068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.392 [2024-11-26 20:55:43.468094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.392 qpair failed and we were unable to recover it. 00:25:40.392 [2024-11-26 20:55:43.468190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.392 [2024-11-26 20:55:43.468230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.392 qpair failed and we were unable to recover it. 00:25:40.392 [2024-11-26 20:55:43.468324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.392 [2024-11-26 20:55:43.468353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.393 qpair failed and we were unable to recover it. 00:25:40.393 [2024-11-26 20:55:43.468446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.393 [2024-11-26 20:55:43.468485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.393 qpair failed and we were unable to recover it. 00:25:40.393 [2024-11-26 20:55:43.468599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.393 [2024-11-26 20:55:43.468627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.393 qpair failed and we were unable to recover it. 00:25:40.393 [2024-11-26 20:55:43.468741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.393 [2024-11-26 20:55:43.468768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.393 qpair failed and we were unable to recover it. 
00:25:40.393 [2024-11-26 20:55:43.468879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.393 [2024-11-26 20:55:43.468904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.393 qpair failed and we were unable to recover it. 00:25:40.393 [2024-11-26 20:55:43.468993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.393 [2024-11-26 20:55:43.469018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.393 qpair failed and we were unable to recover it. 00:25:40.393 [2024-11-26 20:55:43.469126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.393 [2024-11-26 20:55:43.469151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.393 qpair failed and we were unable to recover it. 00:25:40.393 [2024-11-26 20:55:43.469278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.393 [2024-11-26 20:55:43.469327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.393 qpair failed and we were unable to recover it. 00:25:40.393 [2024-11-26 20:55:43.469445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.393 [2024-11-26 20:55:43.469474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.393 qpair failed and we were unable to recover it. 00:25:40.393 [2024-11-26 20:55:43.469575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.393 [2024-11-26 20:55:43.469603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.393 qpair failed and we were unable to recover it. 00:25:40.393 [2024-11-26 20:55:43.469716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.393 [2024-11-26 20:55:43.469743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.394 qpair failed and we were unable to recover it. 00:25:40.394 [2024-11-26 20:55:43.469858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.394 [2024-11-26 20:55:43.469884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.394 qpair failed and we were unable to recover it. 00:25:40.394 [2024-11-26 20:55:43.469996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.394 [2024-11-26 20:55:43.470023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.394 qpair failed and we were unable to recover it. 00:25:40.394 [2024-11-26 20:55:43.470111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.394 [2024-11-26 20:55:43.470150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.394 qpair failed and we were unable to recover it. 
00:25:40.394 [2024-11-26 20:55:43.470243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.394 [2024-11-26 20:55:43.470272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.394 qpair failed and we were unable to recover it. 00:25:40.394 [2024-11-26 20:55:43.470370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.394 [2024-11-26 20:55:43.470399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.394 qpair failed and we were unable to recover it. 00:25:40.394 [2024-11-26 20:55:43.470481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.394 [2024-11-26 20:55:43.470508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.394 qpair failed and we were unable to recover it. 00:25:40.394 [2024-11-26 20:55:43.470619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.394 [2024-11-26 20:55:43.470645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.394 qpair failed and we were unable to recover it. 00:25:40.394 [2024-11-26 20:55:43.470730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.394 [2024-11-26 20:55:43.470756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.394 qpair failed and we were unable to recover it. 00:25:40.394 [2024-11-26 20:55:43.470903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.394 [2024-11-26 20:55:43.470932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.394 qpair failed and we were unable to recover it. 00:25:40.394 [2024-11-26 20:55:43.471106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.394 [2024-11-26 20:55:43.471159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.395 qpair failed and we were unable to recover it. 00:25:40.395 [2024-11-26 20:55:43.471253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.395 [2024-11-26 20:55:43.471293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.395 qpair failed and we were unable to recover it. 00:25:40.395 [2024-11-26 20:55:43.471407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.395 [2024-11-26 20:55:43.471436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.395 qpair failed and we were unable to recover it. 00:25:40.395 [2024-11-26 20:55:43.471557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.395 [2024-11-26 20:55:43.471585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.395 qpair failed and we were unable to recover it. 
00:25:40.395 [2024-11-26 20:55:43.471690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.395 [2024-11-26 20:55:43.471717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.395 qpair failed and we were unable to recover it. 00:25:40.395 [2024-11-26 20:55:43.471825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.395 [2024-11-26 20:55:43.471852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.395 qpair failed and we were unable to recover it. 00:25:40.395 [2024-11-26 20:55:43.471945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.395 [2024-11-26 20:55:43.471972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.395 qpair failed and we were unable to recover it. 00:25:40.395 [2024-11-26 20:55:43.472125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.395 [2024-11-26 20:55:43.472166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.395 qpair failed and we were unable to recover it. 00:25:40.395 [2024-11-26 20:55:43.472269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.395 [2024-11-26 20:55:43.472297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.395 qpair failed and we were unable to recover it. 00:25:40.396 [2024-11-26 20:55:43.472416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.396 [2024-11-26 20:55:43.472443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.396 qpair failed and we were unable to recover it. 00:25:40.396 [2024-11-26 20:55:43.472536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.396 [2024-11-26 20:55:43.472563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.396 qpair failed and we were unable to recover it. 00:25:40.396 [2024-11-26 20:55:43.472679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.396 [2024-11-26 20:55:43.472705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.396 qpair failed and we were unable to recover it. 00:25:40.396 [2024-11-26 20:55:43.473442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.396 [2024-11-26 20:55:43.473469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.396 qpair failed and we were unable to recover it. 00:25:40.396 [2024-11-26 20:55:43.473552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.396 [2024-11-26 20:55:43.473579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.396 qpair failed and we were unable to recover it. 
00:25:40.396 [2024-11-26 20:55:43.473699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.396 [2024-11-26 20:55:43.473726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.396 qpair failed and we were unable to recover it. 00:25:40.396 [2024-11-26 20:55:43.473846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.396 [2024-11-26 20:55:43.473877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.396 qpair failed and we were unable to recover it. 00:25:40.396 [2024-11-26 20:55:43.474017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.396 [2024-11-26 20:55:43.474044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.396 qpair failed and we were unable to recover it. 00:25:40.396 [2024-11-26 20:55:43.474130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.396 [2024-11-26 20:55:43.474155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.396 qpair failed and we were unable to recover it. 00:25:40.396 [2024-11-26 20:55:43.474294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.396 [2024-11-26 20:55:43.474329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.396 qpair failed and we were unable to recover it. 00:25:40.396 [2024-11-26 20:55:43.474445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.396 [2024-11-26 20:55:43.474471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.396 qpair failed and we were unable to recover it. 00:25:40.396 [2024-11-26 20:55:43.474557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.396 [2024-11-26 20:55:43.474583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.396 qpair failed and we were unable to recover it. 00:25:40.396 [2024-11-26 20:55:43.474695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.396 [2024-11-26 20:55:43.474722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.396 qpair failed and we were unable to recover it. 00:25:40.396 [2024-11-26 20:55:43.474832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.396 [2024-11-26 20:55:43.474858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.396 qpair failed and we were unable to recover it. 00:25:40.396 [2024-11-26 20:55:43.474968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.397 [2024-11-26 20:55:43.474994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.397 qpair failed and we were unable to recover it. 
00:25:40.397 [2024-11-26 20:55:43.475106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.397 [2024-11-26 20:55:43.475132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.397 qpair failed and we were unable to recover it. 00:25:40.397 [2024-11-26 20:55:43.475217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.397 [2024-11-26 20:55:43.475245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.397 qpair failed and we were unable to recover it. 00:25:40.397 [2024-11-26 20:55:43.475351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.397 [2024-11-26 20:55:43.475391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.397 qpair failed and we were unable to recover it. 00:25:40.397 [2024-11-26 20:55:43.475494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.397 [2024-11-26 20:55:43.475534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.397 qpair failed and we were unable to recover it. 00:25:40.397 [2024-11-26 20:55:43.475660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.397 [2024-11-26 20:55:43.475688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.397 qpair failed and we were unable to recover it. 00:25:40.397 [2024-11-26 20:55:43.475831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.397 [2024-11-26 20:55:43.475858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.397 qpair failed and we were unable to recover it. 00:25:40.397 [2024-11-26 20:55:43.475966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.397 [2024-11-26 20:55:43.475993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.397 qpair failed and we were unable to recover it. 00:25:40.397 [2024-11-26 20:55:43.476167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.397 [2024-11-26 20:55:43.476223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.397 qpair failed and we were unable to recover it. 00:25:40.397 [2024-11-26 20:55:43.476317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.397 [2024-11-26 20:55:43.476345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.397 qpair failed and we were unable to recover it. 00:25:40.397 [2024-11-26 20:55:43.476438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.397 [2024-11-26 20:55:43.476465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.397 qpair failed and we were unable to recover it. 
00:25:40.397 [2024-11-26 20:55:43.476601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.397 [2024-11-26 20:55:43.476627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.398 qpair failed and we were unable to recover it. 00:25:40.398 [2024-11-26 20:55:43.476730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.398 [2024-11-26 20:55:43.476756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.398 qpair failed and we were unable to recover it. 00:25:40.398 [2024-11-26 20:55:43.476870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.398 [2024-11-26 20:55:43.476896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.398 qpair failed and we were unable to recover it. 00:25:40.398 [2024-11-26 20:55:43.477017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.398 [2024-11-26 20:55:43.477045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.398 qpair failed and we were unable to recover it. 00:25:40.398 [2024-11-26 20:55:43.477141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.398 [2024-11-26 20:55:43.477167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.398 qpair failed and we were unable to recover it. 00:25:40.398 [2024-11-26 20:55:43.477310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.398 [2024-11-26 20:55:43.477337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.398 qpair failed and we were unable to recover it. 00:25:40.398 [2024-11-26 20:55:43.477452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.399 [2024-11-26 20:55:43.477478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.399 qpair failed and we were unable to recover it. 00:25:40.399 [2024-11-26 20:55:43.477563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.399 [2024-11-26 20:55:43.477590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.399 qpair failed and we were unable to recover it. 00:25:40.399 [2024-11-26 20:55:43.477716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.399 [2024-11-26 20:55:43.477756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.399 qpair failed and we were unable to recover it. 00:25:40.399 [2024-11-26 20:55:43.477901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.399 [2024-11-26 20:55:43.477929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.399 qpair failed and we were unable to recover it. 
00:25:40.399 [2024-11-26 20:55:43.478072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.399 [2024-11-26 20:55:43.478098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.399 qpair failed and we were unable to recover it. 00:25:40.399 [2024-11-26 20:55:43.478180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.399 [2024-11-26 20:55:43.478206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.399 qpair failed and we were unable to recover it. 00:25:40.399 [2024-11-26 20:55:43.478293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.399 [2024-11-26 20:55:43.478327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.399 qpair failed and we were unable to recover it. 00:25:40.399 [2024-11-26 20:55:43.478412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.399 [2024-11-26 20:55:43.478438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.399 qpair failed and we were unable to recover it. 00:25:40.399 [2024-11-26 20:55:43.478520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.399 [2024-11-26 20:55:43.478545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.399 qpair failed and we were unable to recover it. 00:25:40.399 [2024-11-26 20:55:43.478661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.399 [2024-11-26 20:55:43.478691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.399 qpair failed and we were unable to recover it. 00:25:40.399 [2024-11-26 20:55:43.478862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.399 [2024-11-26 20:55:43.478903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.399 qpair failed and we were unable to recover it. 00:25:40.399 [2024-11-26 20:55:43.479022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.399 [2024-11-26 20:55:43.479049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.399 qpair failed and we were unable to recover it. 00:25:40.399 [2024-11-26 20:55:43.479198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.399 [2024-11-26 20:55:43.479224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.399 qpair failed and we were unable to recover it. 00:25:40.399 [2024-11-26 20:55:43.479338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.399 [2024-11-26 20:55:43.479365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.399 qpair failed and we were unable to recover it. 
00:25:40.399 [2024-11-26 20:55:43.479473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.400 [2024-11-26 20:55:43.479499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.400 qpair failed and we were unable to recover it. 00:25:40.400 [2024-11-26 20:55:43.479581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.400 [2024-11-26 20:55:43.479607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.400 qpair failed and we were unable to recover it. 00:25:40.400 [2024-11-26 20:55:43.479723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.400 [2024-11-26 20:55:43.479750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.400 qpair failed and we were unable to recover it. 00:25:40.400 [2024-11-26 20:55:43.479842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.400 [2024-11-26 20:55:43.479869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.400 qpair failed and we were unable to recover it. 00:25:40.400 [2024-11-26 20:55:43.479978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.400 [2024-11-26 20:55:43.480004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.400 qpair failed and we were unable to recover it. 00:25:40.400 [2024-11-26 20:55:43.480096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.400 [2024-11-26 20:55:43.480122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.400 qpair failed and we were unable to recover it. 00:25:40.400 [2024-11-26 20:55:43.480202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.400 [2024-11-26 20:55:43.480228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.400 qpair failed and we were unable to recover it. 00:25:40.400 [2024-11-26 20:55:43.480314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.400 [2024-11-26 20:55:43.480343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.400 qpair failed and we were unable to recover it. 00:25:40.400 [2024-11-26 20:55:43.480430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.400 [2024-11-26 20:55:43.480460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.400 qpair failed and we were unable to recover it. 00:25:40.400 [2024-11-26 20:55:43.480556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.400 [2024-11-26 20:55:43.480584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.400 qpair failed and we were unable to recover it. 
00:25:40.400 [2024-11-26 20:55:43.480670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.400 [2024-11-26 20:55:43.480697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.400 qpair failed and we were unable to recover it. 00:25:40.400 [2024-11-26 20:55:43.480787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.400 [2024-11-26 20:55:43.480814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.400 qpair failed and we were unable to recover it. 00:25:40.400 [2024-11-26 20:55:43.480929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.401 [2024-11-26 20:55:43.480956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.401 qpair failed and we were unable to recover it. 00:25:40.401 [2024-11-26 20:55:43.481065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.401 [2024-11-26 20:55:43.481092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.401 qpair failed and we were unable to recover it. 00:25:40.401 [2024-11-26 20:55:43.481201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.401 [2024-11-26 20:55:43.481228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.401 qpair failed and we were unable to recover it. 00:25:40.401 [2024-11-26 20:55:43.481350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.401 [2024-11-26 20:55:43.481377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.401 qpair failed and we were unable to recover it. 00:25:40.401 [2024-11-26 20:55:43.481496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.401 [2024-11-26 20:55:43.481522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.401 qpair failed and we were unable to recover it. 00:25:40.401 [2024-11-26 20:55:43.481600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.401 [2024-11-26 20:55:43.481627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.401 qpair failed and we were unable to recover it. 00:25:40.401 [2024-11-26 20:55:43.481710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.401 [2024-11-26 20:55:43.481737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.401 qpair failed and we were unable to recover it. 00:25:40.401 [2024-11-26 20:55:43.481820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.401 [2024-11-26 20:55:43.481849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.401 qpair failed and we were unable to recover it. 
00:25:40.401 [2024-11-26 20:55:43.481934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.401 [2024-11-26 20:55:43.481961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.401 qpair failed and we were unable to recover it. 00:25:40.401 [2024-11-26 20:55:43.482087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.401 [2024-11-26 20:55:43.482126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.401 qpair failed and we were unable to recover it. 00:25:40.401 [2024-11-26 20:55:43.482253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.401 [2024-11-26 20:55:43.482283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.401 qpair failed and we were unable to recover it. 00:25:40.401 [2024-11-26 20:55:43.482382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.401 [2024-11-26 20:55:43.482421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.401 qpair failed and we were unable to recover it. 00:25:40.401 [2024-11-26 20:55:43.482518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.402 [2024-11-26 20:55:43.482546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.402 qpair failed and we were unable to recover it. 00:25:40.402 [2024-11-26 20:55:43.482723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.402 [2024-11-26 20:55:43.482775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.402 qpair failed and we were unable to recover it. 00:25:40.402 [2024-11-26 20:55:43.482885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.402 [2024-11-26 20:55:43.482943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.402 qpair failed and we were unable to recover it. 00:25:40.402 [2024-11-26 20:55:43.483098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.402 [2024-11-26 20:55:43.483153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.402 qpair failed and we were unable to recover it. 00:25:40.402 [2024-11-26 20:55:43.483231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.402 [2024-11-26 20:55:43.483263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.402 qpair failed and we were unable to recover it. 00:25:40.402 [2024-11-26 20:55:43.483373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.402 [2024-11-26 20:55:43.483401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.402 qpair failed and we were unable to recover it. 
00:25:40.402 [2024-11-26 20:55:43.483489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.402 [2024-11-26 20:55:43.483515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.402 qpair failed and we were unable to recover it. 00:25:40.402 [2024-11-26 20:55:43.483656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.402 [2024-11-26 20:55:43.483683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.402 qpair failed and we were unable to recover it. 00:25:40.402 [2024-11-26 20:55:43.483851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.402 [2024-11-26 20:55:43.483877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.402 qpair failed and we were unable to recover it. 00:25:40.402 [2024-11-26 20:55:43.484016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.402 [2024-11-26 20:55:43.484042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.403 qpair failed and we were unable to recover it. 00:25:40.403 [2024-11-26 20:55:43.484158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.403 [2024-11-26 20:55:43.484188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.403 qpair failed and we were unable to recover it. 00:25:40.403 [2024-11-26 20:55:43.484354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.403 [2024-11-26 20:55:43.484395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.403 qpair failed and we were unable to recover it. 00:25:40.403 [2024-11-26 20:55:43.484514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.403 [2024-11-26 20:55:43.484543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.403 qpair failed and we were unable to recover it. 00:25:40.403 [2024-11-26 20:55:43.484658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.403 [2024-11-26 20:55:43.484685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.403 qpair failed and we were unable to recover it. 00:25:40.403 [2024-11-26 20:55:43.484801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.403 [2024-11-26 20:55:43.484828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.403 qpair failed and we were unable to recover it. 00:25:40.403 [2024-11-26 20:55:43.484917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.403 [2024-11-26 20:55:43.484943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.403 qpair failed and we were unable to recover it. 
00:25:40.403 [2024-11-26 20:55:43.485058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.403 [2024-11-26 20:55:43.485086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.403 qpair failed and we were unable to recover it. 00:25:40.403 [2024-11-26 20:55:43.485183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.403 [2024-11-26 20:55:43.485223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.403 qpair failed and we were unable to recover it. 00:25:40.403 [2024-11-26 20:55:43.485370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.403 [2024-11-26 20:55:43.485411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.403 qpair failed and we were unable to recover it. 00:25:40.403 [2024-11-26 20:55:43.485514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.403 [2024-11-26 20:55:43.485542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.403 qpair failed and we were unable to recover it. 00:25:40.403 [2024-11-26 20:55:43.485681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.403 [2024-11-26 20:55:43.485708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.403 qpair failed and we were unable to recover it. 00:25:40.403 [2024-11-26 20:55:43.485923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.404 [2024-11-26 20:55:43.485986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.404 qpair failed and we were unable to recover it. 00:25:40.404 [2024-11-26 20:55:43.486101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.404 [2024-11-26 20:55:43.486127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.404 qpair failed and we were unable to recover it. 00:25:40.404 [2024-11-26 20:55:43.486249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.404 [2024-11-26 20:55:43.486275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.404 qpair failed and we were unable to recover it. 00:25:40.404 [2024-11-26 20:55:43.486400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.404 [2024-11-26 20:55:43.486427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.404 qpair failed and we were unable to recover it. 00:25:40.404 [2024-11-26 20:55:43.486569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.404 [2024-11-26 20:55:43.486596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.404 qpair failed and we were unable to recover it. 
00:25:40.404 [2024-11-26 20:55:43.486709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.404 [2024-11-26 20:55:43.486736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.404 qpair failed and we were unable to recover it. 00:25:40.404 [2024-11-26 20:55:43.486818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.404 [2024-11-26 20:55:43.486845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.404 qpair failed and we were unable to recover it. 00:25:40.404 [2024-11-26 20:55:43.486963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.404 [2024-11-26 20:55:43.486990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.404 qpair failed and we were unable to recover it. 00:25:40.404 [2024-11-26 20:55:43.487122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.404 [2024-11-26 20:55:43.487161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.404 qpair failed and we were unable to recover it. 00:25:40.404 [2024-11-26 20:55:43.487281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.405 [2024-11-26 20:55:43.487316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.405 qpair failed and we were unable to recover it. 00:25:40.405 [2024-11-26 20:55:43.487427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.405 [2024-11-26 20:55:43.487465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.405 qpair failed and we were unable to recover it. 00:25:40.405 [2024-11-26 20:55:43.487565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.405 [2024-11-26 20:55:43.487592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.405 qpair failed and we were unable to recover it. 00:25:40.405 [2024-11-26 20:55:43.487705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.405 [2024-11-26 20:55:43.487732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.405 qpair failed and we were unable to recover it. 00:25:40.405 [2024-11-26 20:55:43.487811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.405 [2024-11-26 20:55:43.487837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.405 qpair failed and we were unable to recover it. 00:25:40.405 [2024-11-26 20:55:43.487972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.405 [2024-11-26 20:55:43.487999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.405 qpair failed and we were unable to recover it. 
00:25:40.405 [2024-11-26 20:55:43.488095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.405 [2024-11-26 20:55:43.488134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.405 qpair failed and we were unable to recover it. 00:25:40.405 [2024-11-26 20:55:43.488256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.405 [2024-11-26 20:55:43.488284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.405 qpair failed and we were unable to recover it. 00:25:40.406 [2024-11-26 20:55:43.488427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.406 [2024-11-26 20:55:43.488456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.406 qpair failed and we were unable to recover it. 00:25:40.406 [2024-11-26 20:55:43.488540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.406 [2024-11-26 20:55:43.488568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.406 qpair failed and we were unable to recover it. 00:25:40.406 [2024-11-26 20:55:43.488705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.406 [2024-11-26 20:55:43.488731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.406 qpair failed and we were unable to recover it. 00:25:40.406 [2024-11-26 20:55:43.488816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.406 [2024-11-26 20:55:43.488842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.406 qpair failed and we were unable to recover it. 00:25:40.406 [2024-11-26 20:55:43.489038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.406 [2024-11-26 20:55:43.489091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.406 qpair failed and we were unable to recover it. 00:25:40.406 [2024-11-26 20:55:43.489230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.406 [2024-11-26 20:55:43.489256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.406 qpair failed and we were unable to recover it. 00:25:40.406 [2024-11-26 20:55:43.489373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.406 [2024-11-26 20:55:43.489408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.406 qpair failed and we were unable to recover it. 00:25:40.406 [2024-11-26 20:55:43.489496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.407 [2024-11-26 20:55:43.489524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.407 qpair failed and we were unable to recover it. 
00:25:40.407 [2024-11-26 20:55:43.489647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.407 [2024-11-26 20:55:43.489673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.407 qpair failed and we were unable to recover it. 00:25:40.407 [2024-11-26 20:55:43.489756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.407 [2024-11-26 20:55:43.489783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.407 qpair failed and we were unable to recover it. 00:25:40.407 [2024-11-26 20:55:43.489916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.407 [2024-11-26 20:55:43.489971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.407 qpair failed and we were unable to recover it. 00:25:40.407 [2024-11-26 20:55:43.490087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.407 [2024-11-26 20:55:43.490112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.407 qpair failed and we were unable to recover it. 00:25:40.407 [2024-11-26 20:55:43.490201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.407 [2024-11-26 20:55:43.490227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.407 qpair failed and we were unable to recover it. 00:25:40.407 [2024-11-26 20:55:43.490323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.407 [2024-11-26 20:55:43.490352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.407 qpair failed and we were unable to recover it. 00:25:40.407 [2024-11-26 20:55:43.490469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.407 [2024-11-26 20:55:43.490497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.407 qpair failed and we were unable to recover it. 00:25:40.407 [2024-11-26 20:55:43.490590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.407 [2024-11-26 20:55:43.490617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.407 qpair failed and we were unable to recover it. 00:25:40.407 [2024-11-26 20:55:43.490758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.407 [2024-11-26 20:55:43.490785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.407 qpair failed and we were unable to recover it. 00:25:40.407 [2024-11-26 20:55:43.490894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.408 [2024-11-26 20:55:43.490920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.408 qpair failed and we were unable to recover it. 
00:25:40.408 [2024-11-26 20:55:43.491030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.408 [2024-11-26 20:55:43.491057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.408 qpair failed and we were unable to recover it. 00:25:40.408 [2024-11-26 20:55:43.491171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.408 [2024-11-26 20:55:43.491197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.408 qpair failed and we were unable to recover it. 00:25:40.408 [2024-11-26 20:55:43.491331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.408 [2024-11-26 20:55:43.491371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.408 qpair failed and we were unable to recover it. 00:25:40.408 [2024-11-26 20:55:43.491489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.408 [2024-11-26 20:55:43.491518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.408 qpair failed and we were unable to recover it. 00:25:40.408 [2024-11-26 20:55:43.491610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.408 [2024-11-26 20:55:43.491637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.408 qpair failed and we were unable to recover it. 00:25:40.408 [2024-11-26 20:55:43.491725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.409 [2024-11-26 20:55:43.491752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.409 qpair failed and we were unable to recover it. 00:25:40.409 [2024-11-26 20:55:43.491865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.409 [2024-11-26 20:55:43.491891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.409 qpair failed and we were unable to recover it. 00:25:40.409 [2024-11-26 20:55:43.491973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.409 [2024-11-26 20:55:43.492000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.409 qpair failed and we were unable to recover it. 00:25:40.409 [2024-11-26 20:55:43.492113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.409 [2024-11-26 20:55:43.492141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.409 qpair failed and we were unable to recover it. 00:25:40.409 [2024-11-26 20:55:43.492271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.409 [2024-11-26 20:55:43.492323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.409 qpair failed and we were unable to recover it. 
00:25:40.409 [2024-11-26 20:55:43.492449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.409 [2024-11-26 20:55:43.492478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.409 qpair failed and we were unable to recover it. 00:25:40.409 [2024-11-26 20:55:43.492617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.409 [2024-11-26 20:55:43.492643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.409 qpair failed and we were unable to recover it. 00:25:40.409 [2024-11-26 20:55:43.492810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.409 [2024-11-26 20:55:43.492872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.409 qpair failed and we were unable to recover it. 00:25:40.409 [2024-11-26 20:55:43.493034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.409 [2024-11-26 20:55:43.493088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.409 qpair failed and we were unable to recover it. 00:25:40.409 [2024-11-26 20:55:43.493204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.409 [2024-11-26 20:55:43.493230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.409 qpair failed and we were unable to recover it. 00:25:40.409 [2024-11-26 20:55:43.493339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.409 [2024-11-26 20:55:43.493374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.409 qpair failed and we were unable to recover it. 00:25:40.409 [2024-11-26 20:55:43.493462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.409 [2024-11-26 20:55:43.493488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.409 qpair failed and we were unable to recover it. 00:25:40.409 [2024-11-26 20:55:43.493577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.409 [2024-11-26 20:55:43.493603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.409 qpair failed and we were unable to recover it. 00:25:40.409 [2024-11-26 20:55:43.493721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.409 [2024-11-26 20:55:43.493748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.409 qpair failed and we were unable to recover it. 00:25:40.409 [2024-11-26 20:55:43.493872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.409 [2024-11-26 20:55:43.493898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.409 qpair failed and we were unable to recover it. 
00:25:40.409 [2024-11-26 20:55:43.494008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.409 [2024-11-26 20:55:43.494034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.409 qpair failed and we were unable to recover it. 00:25:40.409 [2024-11-26 20:55:43.494175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.409 [2024-11-26 20:55:43.494202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.409 qpair failed and we were unable to recover it. 00:25:40.409 [2024-11-26 20:55:43.494298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.409 [2024-11-26 20:55:43.494347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.409 qpair failed and we were unable to recover it. 00:25:40.409 [2024-11-26 20:55:43.494440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.409 [2024-11-26 20:55:43.494468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.409 qpair failed and we were unable to recover it. 00:25:40.409 [2024-11-26 20:55:43.494577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.409 [2024-11-26 20:55:43.494603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.409 qpair failed and we were unable to recover it. 00:25:40.409 [2024-11-26 20:55:43.494719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.409 [2024-11-26 20:55:43.494745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.409 qpair failed and we were unable to recover it. 00:25:40.409 [2024-11-26 20:55:43.494859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.409 [2024-11-26 20:55:43.494886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.409 qpair failed and we were unable to recover it. 00:25:40.409 [2024-11-26 20:55:43.494997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.409 [2024-11-26 20:55:43.495023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.409 qpair failed and we were unable to recover it. 00:25:40.409 [2024-11-26 20:55:43.495132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.409 [2024-11-26 20:55:43.495157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.409 qpair failed and we were unable to recover it. 00:25:40.409 [2024-11-26 20:55:43.495297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.409 [2024-11-26 20:55:43.495329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.409 qpair failed and we were unable to recover it. 
00:25:40.409 [2024-11-26 20:55:43.495441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.409 [2024-11-26 20:55:43.495467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.409 qpair failed and we were unable to recover it. 00:25:40.409 [2024-11-26 20:55:43.495560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.409 [2024-11-26 20:55:43.495588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.409 qpair failed and we were unable to recover it. 00:25:40.409 [2024-11-26 20:55:43.495702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.409 [2024-11-26 20:55:43.495729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.409 qpair failed and we were unable to recover it. 00:25:40.409 [2024-11-26 20:55:43.495823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.409 [2024-11-26 20:55:43.495849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.409 qpair failed and we were unable to recover it. 00:25:40.409 [2024-11-26 20:55:43.495936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.495962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.410 qpair failed and we were unable to recover it. 00:25:40.410 [2024-11-26 20:55:43.496074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.496101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.410 qpair failed and we were unable to recover it. 00:25:40.410 [2024-11-26 20:55:43.496181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.496208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.410 qpair failed and we were unable to recover it. 00:25:40.410 [2024-11-26 20:55:43.496327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.496355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.410 qpair failed and we were unable to recover it. 00:25:40.410 [2024-11-26 20:55:43.496441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.496467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.410 qpair failed and we were unable to recover it. 00:25:40.410 [2024-11-26 20:55:43.496578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.496603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.410 qpair failed and we were unable to recover it. 
00:25:40.410 [2024-11-26 20:55:43.496685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.496711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.410 qpair failed and we were unable to recover it. 00:25:40.410 [2024-11-26 20:55:43.496824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.496850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.410 qpair failed and we were unable to recover it. 00:25:40.410 [2024-11-26 20:55:43.496946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.496973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.410 qpair failed and we were unable to recover it. 00:25:40.410 [2024-11-26 20:55:43.497089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.497115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.410 qpair failed and we were unable to recover it. 00:25:40.410 [2024-11-26 20:55:43.497219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.497259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.410 qpair failed and we were unable to recover it. 00:25:40.410 [2024-11-26 20:55:43.497391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.497420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.410 qpair failed and we were unable to recover it. 00:25:40.410 [2024-11-26 20:55:43.497511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.497539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.410 qpair failed and we were unable to recover it. 00:25:40.410 [2024-11-26 20:55:43.497626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.497653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.410 qpair failed and we were unable to recover it. 00:25:40.410 [2024-11-26 20:55:43.497783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.497810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.410 qpair failed and we were unable to recover it. 00:25:40.410 [2024-11-26 20:55:43.497925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.497952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.410 qpair failed and we were unable to recover it. 
00:25:40.410 [2024-11-26 20:55:43.498094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.498121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.410 qpair failed and we were unable to recover it. 00:25:40.410 [2024-11-26 20:55:43.498269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.498295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.410 qpair failed and we were unable to recover it. 00:25:40.410 [2024-11-26 20:55:43.498416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.498443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.410 qpair failed and we were unable to recover it. 00:25:40.410 [2024-11-26 20:55:43.498565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.498591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.410 qpair failed and we were unable to recover it. 00:25:40.410 [2024-11-26 20:55:43.498678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.498704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.410 qpair failed and we were unable to recover it. 00:25:40.410 [2024-11-26 20:55:43.498788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.498814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.410 qpair failed and we were unable to recover it. 00:25:40.410 [2024-11-26 20:55:43.498955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.498981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.410 qpair failed and we were unable to recover it. 00:25:40.410 [2024-11-26 20:55:43.499096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.499123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.410 qpair failed and we were unable to recover it. 00:25:40.410 [2024-11-26 20:55:43.499224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.499263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.410 qpair failed and we were unable to recover it. 00:25:40.410 [2024-11-26 20:55:43.499367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.499395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.410 qpair failed and we were unable to recover it. 
00:25:40.410 [2024-11-26 20:55:43.499489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.499515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.410 qpair failed and we were unable to recover it. 00:25:40.410 [2024-11-26 20:55:43.499659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.499685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.410 qpair failed and we were unable to recover it. 00:25:40.410 [2024-11-26 20:55:43.499776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.499802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.410 qpair failed and we were unable to recover it. 00:25:40.410 [2024-11-26 20:55:43.499915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.499942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.410 qpair failed and we were unable to recover it. 00:25:40.410 [2024-11-26 20:55:43.500019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.500045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.410 qpair failed and we were unable to recover it. 00:25:40.410 [2024-11-26 20:55:43.500163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.500189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.410 qpair failed and we were unable to recover it. 00:25:40.410 [2024-11-26 20:55:43.500275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.500312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.410 qpair failed and we were unable to recover it. 00:25:40.410 [2024-11-26 20:55:43.500432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.500458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.410 qpair failed and we were unable to recover it. 00:25:40.410 [2024-11-26 20:55:43.500540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.500566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.410 qpair failed and we were unable to recover it. 00:25:40.410 [2024-11-26 20:55:43.500685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.500713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.410 qpair failed and we were unable to recover it. 
00:25:40.410 [2024-11-26 20:55:43.500852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.410 [2024-11-26 20:55:43.500879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.500967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.500993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.501115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.501142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.501225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.501251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.501382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.501421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.501540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.501567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.501685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.501711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.501806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.501832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.501943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.501969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.502074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.502100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 
00:25:40.411 [2024-11-26 20:55:43.502185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.502213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.502361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.502388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.502501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.502533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.502668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.502695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.502783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.502810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.502942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.502968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.503109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.503135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.503269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.503320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.503425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.503454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.503572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.503600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 
00:25:40.411 [2024-11-26 20:55:43.503763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.503818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.503903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.503930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.504021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.504048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.504183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.504209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.504327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.504355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.504447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.504473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.504561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.504587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.504668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.504694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.504775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.504802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.504910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.504937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 
00:25:40.411 [2024-11-26 20:55:43.505057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.505083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.505175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.505203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.505321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.505351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.505477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.505504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.505583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.505611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.505723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.505750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.505867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.505894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.505982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.506009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.506102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.506130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.506218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.506245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 
00:25:40.411 [2024-11-26 20:55:43.506330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.506357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.506475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.506501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.411 [2024-11-26 20:55:43.506613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.411 [2024-11-26 20:55:43.506641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.411 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.506727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.506753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.506908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.506934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.507018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.507044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.507158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.507187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.507311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.507340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.507451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.507478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.507557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.507584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 
00:25:40.412 [2024-11-26 20:55:43.507673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.507700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.507818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.507845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.507949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.507980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.508126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.508153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.508245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.508285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.508402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.508429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.508565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.508604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.508790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.508845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.509038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.509065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.509180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.509207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 
00:25:40.412 [2024-11-26 20:55:43.509325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.509353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.509465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.509492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.509605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.509632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.509720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.509747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.509835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.509861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.509943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.509970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.510091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.510117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.510224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.510263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.510360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.510389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.510473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.510499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 
00:25:40.412 [2024-11-26 20:55:43.510594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.510621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.510733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.510759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.510834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.510859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.510966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.510991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.511092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.511132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.511247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.511275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.511392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.511419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.511526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.511552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.511683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.511709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.511820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.511846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 
00:25:40.412 [2024-11-26 20:55:43.511920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.511947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.512088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.512114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.512197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.512223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.512367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.512394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.512549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.512588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.412 [2024-11-26 20:55:43.512687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.412 [2024-11-26 20:55:43.512715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.412 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.512833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.512860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.512973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.512999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.513086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.513111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.513249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.513275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 
00:25:40.413 [2024-11-26 20:55:43.513400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.513426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.513556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.513596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.513719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.513748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.513872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.513899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.514058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.514113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.514206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.514235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.514384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.514411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.514500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.514528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.514625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.514652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.514743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.514769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 
00:25:40.413 [2024-11-26 20:55:43.514856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.514881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.514984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.515010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.515096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.515123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.515208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.515233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.515343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.515370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.515455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.515482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.515567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.515595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.515684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.515710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.515820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.515846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.515918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.515944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 
00:25:40.413 [2024-11-26 20:55:43.516035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.516062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.516157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.516186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.516266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.516293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.516396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.516422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.516533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.516560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.516676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.516701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.516817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.516843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.516958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.516984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.517120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.517146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.517236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.517267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 
00:25:40.413 [2024-11-26 20:55:43.517389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.517416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.517536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.517562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.517697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.517723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.517836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.517863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.518003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.518030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.518121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.518147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.518267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.518294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.518414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.518441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.518523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.518548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 00:25:40.413 [2024-11-26 20:55:43.518662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.413 [2024-11-26 20:55:43.518688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.413 qpair failed and we were unable to recover it. 
00:25:40.414 [2024-11-26 20:55:43.518772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.518798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 00:25:40.414 [2024-11-26 20:55:43.518909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.518935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 00:25:40.414 [2024-11-26 20:55:43.519074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.519100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 00:25:40.414 [2024-11-26 20:55:43.519230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.519270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 00:25:40.414 [2024-11-26 20:55:43.519406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.519447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 00:25:40.414 [2024-11-26 20:55:43.519567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.519594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 00:25:40.414 [2024-11-26 20:55:43.519726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.519770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 00:25:40.414 [2024-11-26 20:55:43.519931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.519984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 00:25:40.414 [2024-11-26 20:55:43.520118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.520143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 00:25:40.414 [2024-11-26 20:55:43.520266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.520293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 
00:25:40.414 [2024-11-26 20:55:43.520417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.520443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 00:25:40.414 [2024-11-26 20:55:43.520532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.520557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 00:25:40.414 [2024-11-26 20:55:43.520665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.520691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 00:25:40.414 [2024-11-26 20:55:43.520851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.520903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 00:25:40.414 [2024-11-26 20:55:43.520989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.521014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 00:25:40.414 [2024-11-26 20:55:43.521123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.521149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 00:25:40.414 [2024-11-26 20:55:43.521231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.521262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 00:25:40.414 [2024-11-26 20:55:43.521354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.521385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 00:25:40.414 [2024-11-26 20:55:43.521531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.521559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 00:25:40.414 [2024-11-26 20:55:43.521679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.521705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 
00:25:40.414 [2024-11-26 20:55:43.521815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.521841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 00:25:40.414 [2024-11-26 20:55:43.521970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.521996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 00:25:40.414 [2024-11-26 20:55:43.522124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.522164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 00:25:40.414 [2024-11-26 20:55:43.522281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.522315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 00:25:40.414 [2024-11-26 20:55:43.522426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.522452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 00:25:40.414 [2024-11-26 20:55:43.522568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.522594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 00:25:40.414 [2024-11-26 20:55:43.522729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.522755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 00:25:40.414 [2024-11-26 20:55:43.522845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.522870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 00:25:40.414 [2024-11-26 20:55:43.522970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.523037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 00:25:40.414 [2024-11-26 20:55:43.523165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.523196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 
00:25:40.414 [2024-11-26 20:55:43.523332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.523373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 00:25:40.414 [2024-11-26 20:55:43.523464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.523492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 00:25:40.414 [2024-11-26 20:55:43.523579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.523606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 00:25:40.414 [2024-11-26 20:55:43.523684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.523711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 00:25:40.414 [2024-11-26 20:55:43.523865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.523931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 00:25:40.414 [2024-11-26 20:55:43.524015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.524042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 00:25:40.414 [2024-11-26 20:55:43.524196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.524237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 00:25:40.414 [2024-11-26 20:55:43.524375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.524404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 00:25:40.414 [2024-11-26 20:55:43.524498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.524524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.414 qpair failed and we were unable to recover it. 00:25:40.414 [2024-11-26 20:55:43.524656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.414 [2024-11-26 20:55:43.524682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 
00:25:40.415 [2024-11-26 20:55:43.524865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-11-26 20:55:43.524891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 00:25:40.415 [2024-11-26 20:55:43.524982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-11-26 20:55:43.525008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 00:25:40.415 [2024-11-26 20:55:43.525133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-11-26 20:55:43.525160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 00:25:40.415 [2024-11-26 20:55:43.525288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-11-26 20:55:43.525342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 00:25:40.415 [2024-11-26 20:55:43.525489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-11-26 20:55:43.525518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 00:25:40.415 [2024-11-26 20:55:43.525642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-11-26 20:55:43.525670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 00:25:40.415 [2024-11-26 20:55:43.525782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-11-26 20:55:43.525809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 00:25:40.415 [2024-11-26 20:55:43.525884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-11-26 20:55:43.525910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 00:25:40.415 [2024-11-26 20:55:43.526066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-11-26 20:55:43.526124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 00:25:40.415 [2024-11-26 20:55:43.526209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-11-26 20:55:43.526235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 
00:25:40.415 [2024-11-26 20:55:43.526351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-11-26 20:55:43.526378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 00:25:40.415 [2024-11-26 20:55:43.526458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-11-26 20:55:43.526485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 00:25:40.415 [2024-11-26 20:55:43.526597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-11-26 20:55:43.526637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 00:25:40.415 [2024-11-26 20:55:43.526736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-11-26 20:55:43.526765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 00:25:40.415 [2024-11-26 20:55:43.526878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-11-26 20:55:43.526905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 00:25:40.415 [2024-11-26 20:55:43.527014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-11-26 20:55:43.527040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 00:25:40.415 [2024-11-26 20:55:43.527125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-11-26 20:55:43.527151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 00:25:40.415 [2024-11-26 20:55:43.527239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-11-26 20:55:43.527265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 00:25:40.415 [2024-11-26 20:55:43.527411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-11-26 20:55:43.527438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 00:25:40.415 [2024-11-26 20:55:43.527579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-11-26 20:55:43.527610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 
00:25:40.415 [2024-11-26 20:55:43.527698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-11-26 20:55:43.527725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 00:25:40.415 [2024-11-26 20:55:43.527834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-11-26 20:55:43.527863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 00:25:40.415 [2024-11-26 20:55:43.527983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-11-26 20:55:43.528010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 00:25:40.415 [2024-11-26 20:55:43.528152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-11-26 20:55:43.528179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 00:25:40.415 [2024-11-26 20:55:43.528292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-11-26 20:55:43.528327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 00:25:40.415 [2024-11-26 20:55:43.528415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-11-26 20:55:43.528442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 00:25:40.415 [2024-11-26 20:55:43.528550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-11-26 20:55:43.528576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 00:25:40.415 [2024-11-26 20:55:43.528660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-11-26 20:55:43.528686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 00:25:40.415 [2024-11-26 20:55:43.528802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-11-26 20:55:43.528828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 00:25:40.415 [2024-11-26 20:55:43.528913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-11-26 20:55:43.528939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 
00:25:40.415 [2024-11-26 20:55:43.529036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-11-26 20:55:43.529077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 00:25:40.415 [2024-11-26 20:55:43.529221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-11-26 20:55:43.529249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 00:25:40.415 [2024-11-26 20:55:43.529361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-11-26 20:55:43.529389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 00:25:40.415 [2024-11-26 20:55:43.529527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-11-26 20:55:43.529554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 00:25:40.415 [2024-11-26 20:55:43.529649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-11-26 20:55:43.529677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 00:25:40.415 [2024-11-26 20:55:43.529776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.415 [2024-11-26 20:55:43.529802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.415 qpair failed and we were unable to recover it. 00:25:40.415 [2024-11-26 20:55:43.529971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-11-26 20:55:43.530029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.416 qpair failed and we were unable to recover it. 00:25:40.416 [2024-11-26 20:55:43.530141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-11-26 20:55:43.530167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.416 qpair failed and we were unable to recover it. 00:25:40.416 [2024-11-26 20:55:43.530244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-11-26 20:55:43.530271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.416 qpair failed and we were unable to recover it. 00:25:40.416 [2024-11-26 20:55:43.530397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-11-26 20:55:43.530424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.416 qpair failed and we were unable to recover it. 
00:25:40.416 [2024-11-26 20:55:43.530513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-11-26 20:55:43.530539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.416 qpair failed and we were unable to recover it. 00:25:40.416 [2024-11-26 20:55:43.530657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-11-26 20:55:43.530683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.416 qpair failed and we were unable to recover it. 00:25:40.416 [2024-11-26 20:55:43.530775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-11-26 20:55:43.530801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.416 qpair failed and we were unable to recover it. 00:25:40.416 [2024-11-26 20:55:43.530875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-11-26 20:55:43.530901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.416 qpair failed and we were unable to recover it. 00:25:40.416 [2024-11-26 20:55:43.531015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-11-26 20:55:43.531041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.416 qpair failed and we were unable to recover it. 00:25:40.416 [2024-11-26 20:55:43.531159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-11-26 20:55:43.531184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.416 qpair failed and we were unable to recover it. 00:25:40.416 [2024-11-26 20:55:43.531310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-11-26 20:55:43.531337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.416 qpair failed and we were unable to recover it. 00:25:40.416 [2024-11-26 20:55:43.531421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-11-26 20:55:43.531449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.416 qpair failed and we were unable to recover it. 00:25:40.416 [2024-11-26 20:55:43.531559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-11-26 20:55:43.531585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.416 qpair failed and we were unable to recover it. 00:25:40.416 [2024-11-26 20:55:43.531702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-11-26 20:55:43.531727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.416 qpair failed and we were unable to recover it. 
00:25:40.416 [2024-11-26 20:55:43.531812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-11-26 20:55:43.531839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.416 qpair failed and we were unable to recover it. 00:25:40.416 [2024-11-26 20:55:43.531927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-11-26 20:55:43.531953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.416 qpair failed and we were unable to recover it. 00:25:40.416 [2024-11-26 20:55:43.532101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-11-26 20:55:43.532126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.416 qpair failed and we were unable to recover it. 00:25:40.416 [2024-11-26 20:55:43.532215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-11-26 20:55:43.532241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.416 qpair failed and we were unable to recover it. 00:25:40.416 [2024-11-26 20:55:43.532326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-11-26 20:55:43.532354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.416 qpair failed and we were unable to recover it. 00:25:40.416 [2024-11-26 20:55:43.532439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-11-26 20:55:43.532466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.416 qpair failed and we were unable to recover it. 00:25:40.416 [2024-11-26 20:55:43.532575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-11-26 20:55:43.532601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.416 qpair failed and we were unable to recover it. 00:25:40.416 [2024-11-26 20:55:43.532740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-11-26 20:55:43.532766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.416 qpair failed and we were unable to recover it. 00:25:40.416 [2024-11-26 20:55:43.532903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-11-26 20:55:43.532929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.416 qpair failed and we were unable to recover it. 00:25:40.416 [2024-11-26 20:55:43.533008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-11-26 20:55:43.533034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.416 qpair failed and we were unable to recover it. 
00:25:40.416 [2024-11-26 20:55:43.533133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-11-26 20:55:43.533158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.416 qpair failed and we were unable to recover it. 00:25:40.416 [2024-11-26 20:55:43.533273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-11-26 20:55:43.533319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.416 qpair failed and we were unable to recover it. 00:25:40.416 [2024-11-26 20:55:43.533455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-11-26 20:55:43.533484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.416 qpair failed and we were unable to recover it. 00:25:40.416 [2024-11-26 20:55:43.533575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-11-26 20:55:43.533602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.416 qpair failed and we were unable to recover it. 00:25:40.416 [2024-11-26 20:55:43.533686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-11-26 20:55:43.533714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.416 qpair failed and we were unable to recover it. 00:25:40.416 [2024-11-26 20:55:43.533832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-11-26 20:55:43.533859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.416 qpair failed and we were unable to recover it. 00:25:40.416 [2024-11-26 20:55:43.533969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-11-26 20:55:43.533996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.416 qpair failed and we were unable to recover it. 00:25:40.416 [2024-11-26 20:55:43.534079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-11-26 20:55:43.534105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.416 qpair failed and we were unable to recover it. 00:25:40.416 [2024-11-26 20:55:43.534182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-11-26 20:55:43.534208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.416 qpair failed and we were unable to recover it. 00:25:40.416 [2024-11-26 20:55:43.534316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-11-26 20:55:43.534342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.416 qpair failed and we were unable to recover it. 
00:25:40.416 [2024-11-26 20:55:43.534455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.416 [2024-11-26 20:55:43.534480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.416 qpair failed and we were unable to recover it. 00:25:40.416 [2024-11-26 20:55:43.534579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.534605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 00:25:40.417 [2024-11-26 20:55:43.534686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.534711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 00:25:40.417 [2024-11-26 20:55:43.534823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.534851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 00:25:40.417 [2024-11-26 20:55:43.534936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.534962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 00:25:40.417 [2024-11-26 20:55:43.535079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.535106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 00:25:40.417 [2024-11-26 20:55:43.535225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.535252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 00:25:40.417 [2024-11-26 20:55:43.535362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.535403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 00:25:40.417 [2024-11-26 20:55:43.535505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.535533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 00:25:40.417 [2024-11-26 20:55:43.535618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.535645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 
00:25:40.417 [2024-11-26 20:55:43.535730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.535756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 00:25:40.417 [2024-11-26 20:55:43.535837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.535863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 00:25:40.417 [2024-11-26 20:55:43.536001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.536028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 00:25:40.417 [2024-11-26 20:55:43.536137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.536165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 00:25:40.417 [2024-11-26 20:55:43.536294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.536340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 00:25:40.417 [2024-11-26 20:55:43.536465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.536493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 00:25:40.417 [2024-11-26 20:55:43.536585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.536612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 00:25:40.417 [2024-11-26 20:55:43.536834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.536861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 00:25:40.417 [2024-11-26 20:55:43.536969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.536995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 00:25:40.417 [2024-11-26 20:55:43.537070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.537096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 
00:25:40.417 [2024-11-26 20:55:43.537207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.537234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 00:25:40.417 [2024-11-26 20:55:43.537356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.537383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 00:25:40.417 [2024-11-26 20:55:43.537492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.537518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 00:25:40.417 [2024-11-26 20:55:43.537628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.537655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 00:25:40.417 [2024-11-26 20:55:43.537767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.537793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 00:25:40.417 [2024-11-26 20:55:43.537879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.537906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 00:25:40.417 [2024-11-26 20:55:43.538025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.538051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 00:25:40.417 [2024-11-26 20:55:43.538145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.538177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 00:25:40.417 [2024-11-26 20:55:43.538318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.538344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 00:25:40.417 [2024-11-26 20:55:43.539123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.539156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 
00:25:40.417 [2024-11-26 20:55:43.539285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.539321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 00:25:40.417 [2024-11-26 20:55:43.539421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.539448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 00:25:40.417 [2024-11-26 20:55:43.539563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.539589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 00:25:40.417 [2024-11-26 20:55:43.539700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.539725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 00:25:40.417 [2024-11-26 20:55:43.539847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.539873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 00:25:40.417 [2024-11-26 20:55:43.539963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.539990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 00:25:40.417 [2024-11-26 20:55:43.540127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.540153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 00:25:40.417 [2024-11-26 20:55:43.540239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.540267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 00:25:40.417 [2024-11-26 20:55:43.540381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.540422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 00:25:40.417 [2024-11-26 20:55:43.540585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.540624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 
00:25:40.417 [2024-11-26 20:55:43.540716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.417 [2024-11-26 20:55:43.540744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.417 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.540868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.540895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.540984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.541011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.541120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.541146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.541232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.541259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.541382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.541410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.541527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.541554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.541670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.541697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.541812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.541839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.541952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.541979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 
00:25:40.418 [2024-11-26 20:55:43.542058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.542085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.542224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.542251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.542335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.542364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.542482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.542509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.542616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.542656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.542775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.542803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.542945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.542973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.543102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.543129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.543220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.543247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.543344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.543371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 
00:25:40.418 [2024-11-26 20:55:43.543467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.543494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.543608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.543634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.543714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.543741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.543862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.543891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.544040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.544079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.544225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.544252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.544371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.544399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.544513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.544545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.544659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.544685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.544770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.544796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 
00:25:40.418 [2024-11-26 20:55:43.544932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.544959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.545116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.545156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.545284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.545321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.545440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.545467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.545546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.545573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.545727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.545781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.545962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.546015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.546098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.546125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.546265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.546291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.546387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.546416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 
00:25:40.418 [2024-11-26 20:55:43.546507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.546535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.546705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.546757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.546963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.418 [2024-11-26 20:55:43.546989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.418 qpair failed and we were unable to recover it. 00:25:40.418 [2024-11-26 20:55:43.547096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.419 [2024-11-26 20:55:43.547124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.419 qpair failed and we were unable to recover it. 00:25:40.419 [2024-11-26 20:55:43.547243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.419 [2024-11-26 20:55:43.547272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.419 qpair failed and we were unable to recover it. 00:25:40.419 [2024-11-26 20:55:43.547377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.419 [2024-11-26 20:55:43.547404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.419 qpair failed and we were unable to recover it. 00:25:40.419 [2024-11-26 20:55:43.547486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.419 [2024-11-26 20:55:43.547514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.419 qpair failed and we were unable to recover it. 00:25:40.419 [2024-11-26 20:55:43.547604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.419 [2024-11-26 20:55:43.547630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.419 qpair failed and we were unable to recover it. 00:25:40.419 [2024-11-26 20:55:43.547769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.419 [2024-11-26 20:55:43.547795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.419 qpair failed and we were unable to recover it. 00:25:40.419 [2024-11-26 20:55:43.547923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.419 [2024-11-26 20:55:43.547976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.419 qpair failed and we were unable to recover it. 
00:25:40.419 [2024-11-26 20:55:43.548087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.419 [2024-11-26 20:55:43.548115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.419 qpair failed and we were unable to recover it. 00:25:40.419 [2024-11-26 20:55:43.548245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.419 [2024-11-26 20:55:43.548285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.419 qpair failed and we were unable to recover it. 00:25:40.419 [2024-11-26 20:55:43.548455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.419 [2024-11-26 20:55:43.548483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.419 qpair failed and we were unable to recover it. 00:25:40.419 [2024-11-26 20:55:43.548576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.419 [2024-11-26 20:55:43.548604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.419 qpair failed and we were unable to recover it. 00:25:40.419 [2024-11-26 20:55:43.548692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.419 [2024-11-26 20:55:43.548723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.419 qpair failed and we were unable to recover it. 00:25:40.419 [2024-11-26 20:55:43.548801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.419 [2024-11-26 20:55:43.548828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.419 qpair failed and we were unable to recover it. 00:25:40.419 [2024-11-26 20:55:43.548943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.419 [2024-11-26 20:55:43.548970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.419 qpair failed and we were unable to recover it. 00:25:40.419 [2024-11-26 20:55:43.549101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.419 [2024-11-26 20:55:43.549128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.419 qpair failed and we were unable to recover it. 00:25:40.419 [2024-11-26 20:55:43.549239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.419 [2024-11-26 20:55:43.549266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.419 qpair failed and we were unable to recover it. 00:25:40.419 [2024-11-26 20:55:43.549372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.419 [2024-11-26 20:55:43.549401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.419 qpair failed and we were unable to recover it. 
00:25:40.419 [2024-11-26 20:55:43.549516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.419 [2024-11-26 20:55:43.549544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.419 qpair failed and we were unable to recover it. 00:25:40.419 [2024-11-26 20:55:43.549656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.419 [2024-11-26 20:55:43.549682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.419 qpair failed and we were unable to recover it. 00:25:40.419 [2024-11-26 20:55:43.549768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.419 [2024-11-26 20:55:43.549794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.419 qpair failed and we were unable to recover it. 00:25:40.419 [2024-11-26 20:55:43.549935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.419 [2024-11-26 20:55:43.549962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.419 qpair failed and we were unable to recover it. 00:25:40.419 [2024-11-26 20:55:43.550076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.419 [2024-11-26 20:55:43.550102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.419 qpair failed and we were unable to recover it. 00:25:40.419 [2024-11-26 20:55:43.550209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.419 [2024-11-26 20:55:43.550236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.419 qpair failed and we were unable to recover it. 00:25:40.419 [2024-11-26 20:55:43.550347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.419 [2024-11-26 20:55:43.550375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.419 qpair failed and we were unable to recover it. 00:25:40.419 [2024-11-26 20:55:43.550488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.419 [2024-11-26 20:55:43.550514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.419 qpair failed and we were unable to recover it. 00:25:40.419 [2024-11-26 20:55:43.550641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.419 [2024-11-26 20:55:43.550668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.419 qpair failed and we were unable to recover it. 00:25:40.420 [2024-11-26 20:55:43.550785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.550811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.420 qpair failed and we were unable to recover it. 
00:25:40.420 [2024-11-26 20:55:43.550898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.550927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.420 qpair failed and we were unable to recover it. 00:25:40.420 [2024-11-26 20:55:43.551019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.551046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.420 qpair failed and we were unable to recover it. 00:25:40.420 [2024-11-26 20:55:43.551162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.551188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.420 qpair failed and we were unable to recover it. 00:25:40.420 [2024-11-26 20:55:43.551319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.551348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.420 qpair failed and we were unable to recover it. 00:25:40.420 [2024-11-26 20:55:43.551469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.551496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.420 qpair failed and we were unable to recover it. 00:25:40.420 [2024-11-26 20:55:43.551584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.551618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.420 qpair failed and we were unable to recover it. 00:25:40.420 [2024-11-26 20:55:43.551732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.551759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.420 qpair failed and we were unable to recover it. 00:25:40.420 [2024-11-26 20:55:43.551875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.551901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.420 qpair failed and we were unable to recover it. 00:25:40.420 [2024-11-26 20:55:43.552016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.552042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.420 qpair failed and we were unable to recover it. 00:25:40.420 [2024-11-26 20:55:43.552140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.552179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.420 qpair failed and we were unable to recover it. 
00:25:40.420 [2024-11-26 20:55:43.552282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.552339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.420 qpair failed and we were unable to recover it. 00:25:40.420 [2024-11-26 20:55:43.552454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.552494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.420 qpair failed and we were unable to recover it. 00:25:40.420 [2024-11-26 20:55:43.552625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.552653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.420 qpair failed and we were unable to recover it. 00:25:40.420 [2024-11-26 20:55:43.552739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.552765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.420 qpair failed and we were unable to recover it. 00:25:40.420 [2024-11-26 20:55:43.552879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.552905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.420 qpair failed and we were unable to recover it. 00:25:40.420 [2024-11-26 20:55:43.553037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.553083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.420 qpair failed and we were unable to recover it. 00:25:40.420 [2024-11-26 20:55:43.553213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.553254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.420 qpair failed and we were unable to recover it. 00:25:40.420 [2024-11-26 20:55:43.553392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.553422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.420 qpair failed and we were unable to recover it. 00:25:40.420 [2024-11-26 20:55:43.553518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.553545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.420 qpair failed and we were unable to recover it. 00:25:40.420 [2024-11-26 20:55:43.553666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.553692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.420 qpair failed and we were unable to recover it. 
00:25:40.420 [2024-11-26 20:55:43.553804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.553831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.420 qpair failed and we were unable to recover it. 00:25:40.420 [2024-11-26 20:55:43.553951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.553980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.420 qpair failed and we were unable to recover it. 00:25:40.420 [2024-11-26 20:55:43.554105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.554133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.420 qpair failed and we were unable to recover it. 00:25:40.420 [2024-11-26 20:55:43.554222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.554248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.420 qpair failed and we were unable to recover it. 00:25:40.420 [2024-11-26 20:55:43.554361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.554394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.420 qpair failed and we were unable to recover it. 00:25:40.420 [2024-11-26 20:55:43.554484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.554512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.420 qpair failed and we were unable to recover it. 00:25:40.420 [2024-11-26 20:55:43.554629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.554656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.420 qpair failed and we were unable to recover it. 00:25:40.420 [2024-11-26 20:55:43.554770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.554797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.420 qpair failed and we were unable to recover it. 00:25:40.420 [2024-11-26 20:55:43.554917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.554942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.420 qpair failed and we were unable to recover it. 00:25:40.420 [2024-11-26 20:55:43.555017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.555043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.420 qpair failed and we were unable to recover it. 
00:25:40.420 [2024-11-26 20:55:43.555140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.555169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.420 qpair failed and we were unable to recover it. 00:25:40.420 [2024-11-26 20:55:43.555254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.555281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.420 qpair failed and we were unable to recover it. 00:25:40.420 [2024-11-26 20:55:43.555432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.555471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.420 qpair failed and we were unable to recover it. 00:25:40.420 [2024-11-26 20:55:43.555573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.555601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.420 qpair failed and we were unable to recover it. 00:25:40.420 [2024-11-26 20:55:43.555757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.555784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.420 qpair failed and we were unable to recover it. 00:25:40.420 [2024-11-26 20:55:43.555891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.420 [2024-11-26 20:55:43.555918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.421 qpair failed and we were unable to recover it. 00:25:40.421 [2024-11-26 20:55:43.556080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.421 [2024-11-26 20:55:43.556141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.421 qpair failed and we were unable to recover it. 00:25:40.421 [2024-11-26 20:55:43.556292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.421 [2024-11-26 20:55:43.556339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.421 qpair failed and we were unable to recover it. 00:25:40.421 [2024-11-26 20:55:43.556439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.421 [2024-11-26 20:55:43.556469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.421 qpair failed and we were unable to recover it. 00:25:40.421 [2024-11-26 20:55:43.556588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.421 [2024-11-26 20:55:43.556618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.421 qpair failed and we were unable to recover it. 
00:25:40.421 [2024-11-26 20:55:43.556791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.421 [2024-11-26 20:55:43.556850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.421 qpair failed and we were unable to recover it. 00:25:40.421 [2024-11-26 20:55:43.556979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.421 [2024-11-26 20:55:43.557035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.421 qpair failed and we were unable to recover it. 00:25:40.421 [2024-11-26 20:55:43.557177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.421 [2024-11-26 20:55:43.557204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.421 qpair failed and we were unable to recover it. 00:25:40.421 [2024-11-26 20:55:43.557329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.421 [2024-11-26 20:55:43.557358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.421 qpair failed and we were unable to recover it. 00:25:40.421 [2024-11-26 20:55:43.557449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.421 [2024-11-26 20:55:43.557477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.421 qpair failed and we were unable to recover it. 00:25:40.421 [2024-11-26 20:55:43.557625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.421 [2024-11-26 20:55:43.557652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.421 qpair failed and we were unable to recover it. 00:25:40.421 [2024-11-26 20:55:43.557817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.421 [2024-11-26 20:55:43.557874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.421 qpair failed and we were unable to recover it. 00:25:40.421 [2024-11-26 20:55:43.558039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.421 [2024-11-26 20:55:43.558091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.421 qpair failed and we were unable to recover it. 00:25:40.421 [2024-11-26 20:55:43.558231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.421 [2024-11-26 20:55:43.558258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.421 qpair failed and we were unable to recover it. 00:25:40.421 [2024-11-26 20:55:43.558382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.421 [2024-11-26 20:55:43.558410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.421 qpair failed and we were unable to recover it. 
00:25:40.421 [2024-11-26 20:55:43.558507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.421 [2024-11-26 20:55:43.558533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.421 qpair failed and we were unable to recover it. 00:25:40.421 [2024-11-26 20:55:43.558645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.421 [2024-11-26 20:55:43.558676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.421 qpair failed and we were unable to recover it. 00:25:40.421 [2024-11-26 20:55:43.558789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.421 [2024-11-26 20:55:43.558817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.421 qpair failed and we were unable to recover it. 00:25:40.421 [2024-11-26 20:55:43.558933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.421 [2024-11-26 20:55:43.558959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.421 qpair failed and we were unable to recover it. 00:25:40.421 [2024-11-26 20:55:43.559075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.421 [2024-11-26 20:55:43.559101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.421 qpair failed and we were unable to recover it. 00:25:40.421 [2024-11-26 20:55:43.559209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.421 [2024-11-26 20:55:43.559237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.421 qpair failed and we were unable to recover it. 00:25:40.421 [2024-11-26 20:55:43.559415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.421 [2024-11-26 20:55:43.559453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.421 qpair failed and we were unable to recover it. 00:25:40.421 [2024-11-26 20:55:43.559545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.421 [2024-11-26 20:55:43.559576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.421 qpair failed and we were unable to recover it. 00:25:40.421 [2024-11-26 20:55:43.559673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.421 [2024-11-26 20:55:43.559699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.421 qpair failed and we were unable to recover it. 00:25:40.421 [2024-11-26 20:55:43.559813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.421 [2024-11-26 20:55:43.559839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.421 qpair failed and we were unable to recover it. 
00:25:40.421 [2024-11-26 20:55:43.559952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.421 [2024-11-26 20:55:43.559978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.421 qpair failed and we were unable to recover it. 00:25:40.421 [2024-11-26 20:55:43.560097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.421 [2024-11-26 20:55:43.560123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.421 qpair failed and we were unable to recover it. 00:25:40.421 [2024-11-26 20:55:43.560237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.421 [2024-11-26 20:55:43.560265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.421 qpair failed and we were unable to recover it. 00:25:40.421 [2024-11-26 20:55:43.560373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.421 [2024-11-26 20:55:43.560414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.421 qpair failed and we were unable to recover it. 00:25:40.421 [2024-11-26 20:55:43.560530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.421 [2024-11-26 20:55:43.560557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.421 qpair failed and we were unable to recover it. 00:25:40.421 [2024-11-26 20:55:43.560686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.421 [2024-11-26 20:55:43.560712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.421 qpair failed and we were unable to recover it. 00:25:40.421 [2024-11-26 20:55:43.560828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.421 [2024-11-26 20:55:43.560854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.421 qpair failed and we were unable to recover it. 00:25:40.421 [2024-11-26 20:55:43.560946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.421 [2024-11-26 20:55:43.560972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.421 qpair failed and we were unable to recover it. 00:25:40.421 [2024-11-26 20:55:43.561048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.421 [2024-11-26 20:55:43.561074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.421 qpair failed and we were unable to recover it. 00:25:40.421 [2024-11-26 20:55:43.561180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.421 [2024-11-26 20:55:43.561206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.421 qpair failed and we were unable to recover it. 
00:25:40.421 [2024-11-26 20:55:43.561330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.561358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 00:25:40.422 [2024-11-26 20:55:43.561470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.561497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 00:25:40.422 [2024-11-26 20:55:43.561620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.561646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 00:25:40.422 [2024-11-26 20:55:43.561783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.561809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 00:25:40.422 [2024-11-26 20:55:43.561904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.561930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 00:25:40.422 [2024-11-26 20:55:43.562066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.562091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 00:25:40.422 [2024-11-26 20:55:43.562209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.562235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 00:25:40.422 [2024-11-26 20:55:43.562332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.562359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 00:25:40.422 [2024-11-26 20:55:43.562477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.562509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 00:25:40.422 [2024-11-26 20:55:43.562601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.562629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 
00:25:40.422 [2024-11-26 20:55:43.562721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.562746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 00:25:40.422 [2024-11-26 20:55:43.562830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.562857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 00:25:40.422 [2024-11-26 20:55:43.562946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.562972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 00:25:40.422 [2024-11-26 20:55:43.563087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.563113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 00:25:40.422 [2024-11-26 20:55:43.563226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.563253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 00:25:40.422 [2024-11-26 20:55:43.563336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.563363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 00:25:40.422 [2024-11-26 20:55:43.563443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.563470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 00:25:40.422 [2024-11-26 20:55:43.563613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.563639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 00:25:40.422 [2024-11-26 20:55:43.563723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.563749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 00:25:40.422 [2024-11-26 20:55:43.563861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.563886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 
00:25:40.422 [2024-11-26 20:55:43.563994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.564020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 00:25:40.422 [2024-11-26 20:55:43.564125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.564151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 00:25:40.422 [2024-11-26 20:55:43.564273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.564313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 00:25:40.422 [2024-11-26 20:55:43.564401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.564427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 00:25:40.422 [2024-11-26 20:55:43.564542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.564568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 00:25:40.422 [2024-11-26 20:55:43.564694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.564720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 00:25:40.422 [2024-11-26 20:55:43.564826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.564852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 00:25:40.422 [2024-11-26 20:55:43.564991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.565017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 00:25:40.422 [2024-11-26 20:55:43.565101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.565127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 00:25:40.422 [2024-11-26 20:55:43.565211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.565239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 
00:25:40.422 [2024-11-26 20:55:43.565362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.565390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 00:25:40.422 [2024-11-26 20:55:43.565476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.565502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 00:25:40.422 [2024-11-26 20:55:43.565592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.565618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 00:25:40.422 [2024-11-26 20:55:43.565706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.565732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 00:25:40.422 [2024-11-26 20:55:43.565845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.565871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 00:25:40.422 [2024-11-26 20:55:43.565994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.566040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 00:25:40.422 [2024-11-26 20:55:43.566182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.566222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.422 qpair failed and we were unable to recover it. 00:25:40.422 [2024-11-26 20:55:43.566336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.422 [2024-11-26 20:55:43.566375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 00:25:40.423 [2024-11-26 20:55:43.566474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.566502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 00:25:40.423 [2024-11-26 20:55:43.566591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.566625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 
00:25:40.423 [2024-11-26 20:55:43.566707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.566735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 00:25:40.423 [2024-11-26 20:55:43.566872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.566899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 00:25:40.423 [2024-11-26 20:55:43.566995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.567026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 00:25:40.423 [2024-11-26 20:55:43.567158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.567199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 00:25:40.423 [2024-11-26 20:55:43.567332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.567361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 00:25:40.423 [2024-11-26 20:55:43.567454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.567482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 00:25:40.423 [2024-11-26 20:55:43.567584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.567655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 00:25:40.423 [2024-11-26 20:55:43.567736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.567762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 00:25:40.423 [2024-11-26 20:55:43.567926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.567977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 00:25:40.423 [2024-11-26 20:55:43.568096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.568122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 
00:25:40.423 [2024-11-26 20:55:43.568245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.568286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 00:25:40.423 [2024-11-26 20:55:43.568455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.568483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 00:25:40.423 [2024-11-26 20:55:43.568601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.568628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 00:25:40.423 [2024-11-26 20:55:43.568717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.568744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 00:25:40.423 [2024-11-26 20:55:43.568892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.568945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 00:25:40.423 [2024-11-26 20:55:43.569064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.569090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 00:25:40.423 [2024-11-26 20:55:43.569231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.569257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 00:25:40.423 [2024-11-26 20:55:43.569380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.569406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 00:25:40.423 [2024-11-26 20:55:43.569518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.569544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 00:25:40.423 [2024-11-26 20:55:43.569689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.569715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 
00:25:40.423 [2024-11-26 20:55:43.569853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.569879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 00:25:40.423 [2024-11-26 20:55:43.569966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.569992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 00:25:40.423 [2024-11-26 20:55:43.570116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.570162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 00:25:40.423 [2024-11-26 20:55:43.570291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.570344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 00:25:40.423 [2024-11-26 20:55:43.570446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.570475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 00:25:40.423 [2024-11-26 20:55:43.570569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.570607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 00:25:40.423 [2024-11-26 20:55:43.570808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.570865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 00:25:40.423 [2024-11-26 20:55:43.571008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.571035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 00:25:40.423 [2024-11-26 20:55:43.571152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.571180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 00:25:40.423 [2024-11-26 20:55:43.571276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.571329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 
00:25:40.423 [2024-11-26 20:55:43.571455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.571482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 00:25:40.423 [2024-11-26 20:55:43.571577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.571614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 00:25:40.423 [2024-11-26 20:55:43.571711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.571737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 00:25:40.423 [2024-11-26 20:55:43.571951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.571977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 00:25:40.423 [2024-11-26 20:55:43.572093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.423 [2024-11-26 20:55:43.572120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.423 qpair failed and we were unable to recover it. 00:25:40.423 [2024-11-26 20:55:43.572231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.424 [2024-11-26 20:55:43.572257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.424 qpair failed and we were unable to recover it. 00:25:40.424 [2024-11-26 20:55:43.572395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.424 [2024-11-26 20:55:43.572435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.424 qpair failed and we were unable to recover it. 00:25:40.424 [2024-11-26 20:55:43.572534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.424 [2024-11-26 20:55:43.572563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.424 qpair failed and we were unable to recover it. 00:25:40.424 [2024-11-26 20:55:43.572682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.424 [2024-11-26 20:55:43.572709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.424 qpair failed and we were unable to recover it. 00:25:40.424 [2024-11-26 20:55:43.572796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.424 [2024-11-26 20:55:43.572824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.424 qpair failed and we were unable to recover it. 
00:25:40.424 [2024-11-26 20:55:43.572932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.424 [2024-11-26 20:55:43.572959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.424 qpair failed and we were unable to recover it. 00:25:40.424 [2024-11-26 20:55:43.573051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.424 [2024-11-26 20:55:43.573077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.424 qpair failed and we were unable to recover it. 00:25:40.424 [2024-11-26 20:55:43.573156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.424 [2024-11-26 20:55:43.573184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.424 qpair failed and we were unable to recover it. 00:25:40.424 [2024-11-26 20:55:43.573316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.424 [2024-11-26 20:55:43.573357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.424 qpair failed and we were unable to recover it. 00:25:40.424 [2024-11-26 20:55:43.573464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.424 [2024-11-26 20:55:43.573503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.424 qpair failed and we were unable to recover it. 00:25:40.424 [2024-11-26 20:55:43.573610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.424 [2024-11-26 20:55:43.573637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.424 qpair failed and we were unable to recover it. 00:25:40.424 [2024-11-26 20:55:43.573723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.424 [2024-11-26 20:55:43.573750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.424 qpair failed and we were unable to recover it. 00:25:40.424 [2024-11-26 20:55:43.573859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.424 [2024-11-26 20:55:43.573885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.424 qpair failed and we were unable to recover it. 00:25:40.424 [2024-11-26 20:55:43.573982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.424 [2024-11-26 20:55:43.574011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.424 qpair failed and we were unable to recover it. 00:25:40.424 [2024-11-26 20:55:43.574125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.424 [2024-11-26 20:55:43.574152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.424 qpair failed and we were unable to recover it. 
00:25:40.424 [2024-11-26 20:55:43.574292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.424 [2024-11-26 20:55:43.574328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.424 qpair failed and we were unable to recover it. 00:25:40.424 [2024-11-26 20:55:43.574447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.424 [2024-11-26 20:55:43.574473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.424 qpair failed and we were unable to recover it. 00:25:40.424 [2024-11-26 20:55:43.574616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.424 [2024-11-26 20:55:43.574642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.424 qpair failed and we were unable to recover it. 00:25:40.424 [2024-11-26 20:55:43.574789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.424 [2024-11-26 20:55:43.574815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.424 qpair failed and we were unable to recover it. 00:25:40.424 [2024-11-26 20:55:43.574958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.424 [2024-11-26 20:55:43.574985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.424 qpair failed and we were unable to recover it. 00:25:40.424 [2024-11-26 20:55:43.575068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.424 [2024-11-26 20:55:43.575095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.424 qpair failed and we were unable to recover it. 00:25:40.424 [2024-11-26 20:55:43.575247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.424 [2024-11-26 20:55:43.575295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.424 qpair failed and we were unable to recover it. 00:25:40.424 [2024-11-26 20:55:43.575429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.424 [2024-11-26 20:55:43.575457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.424 qpair failed and we were unable to recover it. 00:25:40.424 [2024-11-26 20:55:43.575572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.424 [2024-11-26 20:55:43.575598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.424 qpair failed and we were unable to recover it. 00:25:40.424 [2024-11-26 20:55:43.575754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.424 [2024-11-26 20:55:43.575779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.424 qpair failed and we were unable to recover it. 
00:25:40.424 [2024-11-26 20:55:43.575889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.424 [2024-11-26 20:55:43.575915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.424 qpair failed and we were unable to recover it. 00:25:40.424 [2024-11-26 20:55:43.576021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.424 [2024-11-26 20:55:43.576090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.424 qpair failed and we were unable to recover it. 00:25:40.424 [2024-11-26 20:55:43.576184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.424 [2024-11-26 20:55:43.576218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.424 qpair failed and we were unable to recover it. 00:25:40.424 [2024-11-26 20:55:43.576376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.424 [2024-11-26 20:55:43.576417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.424 qpair failed and we were unable to recover it. 00:25:40.424 [2024-11-26 20:55:43.576513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.424 [2024-11-26 20:55:43.576542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.576690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.576717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.576838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.576893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.577026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.577072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.577185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.577212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.577328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.577358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 
00:25:40.425 [2024-11-26 20:55:43.577479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.577506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.577612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.577652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.577808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.577837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.577954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.577983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.578092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.578118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.578232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.578259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.578410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.578451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.578537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.578564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.578740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.578799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.578978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.579004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 
00:25:40.425 [2024-11-26 20:55:43.579145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.579171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.579287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.579332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.579478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.579505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.579625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.579651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.579746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.579772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.579858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.579884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.580065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.580092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.580180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.580207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.580289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.580331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.580451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.580484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 
00:25:40.425 [2024-11-26 20:55:43.580578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.580608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.580694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.580721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.580799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.580825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.580933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.580959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.581047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.581087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.581191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.581221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.581319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.581348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.581467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.581493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.581619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.581645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.581781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.581808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 
00:25:40.425 [2024-11-26 20:55:43.581900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.581928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.582044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.582071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.582158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.582184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.582300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.582333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.582423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.582450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.582535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.582561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.582668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.582695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.582807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.582833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.582952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.425 [2024-11-26 20:55:43.582978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.425 qpair failed and we were unable to recover it. 00:25:40.425 [2024-11-26 20:55:43.583096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.583124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 
00:25:40.426 [2024-11-26 20:55:43.583241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.583267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.426 [2024-11-26 20:55:43.583416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.583455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.426 [2024-11-26 20:55:43.583546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.583573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.426 [2024-11-26 20:55:43.583690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.583716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.426 [2024-11-26 20:55:43.583826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.583852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.426 [2024-11-26 20:55:43.583987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.584047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.426 [2024-11-26 20:55:43.584139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.584165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.426 [2024-11-26 20:55:43.584254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.584280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.426 [2024-11-26 20:55:43.584418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.584447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.426 [2024-11-26 20:55:43.584560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.584587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 
00:25:40.426 [2024-11-26 20:55:43.584678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.584705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.426 [2024-11-26 20:55:43.584814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.584841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.426 [2024-11-26 20:55:43.584958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.584984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.426 [2024-11-26 20:55:43.585110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.585149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.426 [2024-11-26 20:55:43.585265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.585293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.426 [2024-11-26 20:55:43.585393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.585419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.426 [2024-11-26 20:55:43.585501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.585528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.426 [2024-11-26 20:55:43.585622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.585648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.426 [2024-11-26 20:55:43.585760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.585788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.426 [2024-11-26 20:55:43.585908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.585941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 
00:25:40.426 [2024-11-26 20:55:43.586063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.586093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.426 [2024-11-26 20:55:43.586197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.586237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.426 [2024-11-26 20:55:43.586345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.586372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.426 [2024-11-26 20:55:43.586462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.586488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.426 [2024-11-26 20:55:43.586577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.586606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.426 [2024-11-26 20:55:43.586746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.586772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.426 [2024-11-26 20:55:43.586885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.586913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.426 [2024-11-26 20:55:43.587026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.587055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.426 [2024-11-26 20:55:43.587159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.587200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.426 [2024-11-26 20:55:43.587322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.587349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 
00:25:40.426 [2024-11-26 20:55:43.587436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.587463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.426 [2024-11-26 20:55:43.587552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.587579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.426 [2024-11-26 20:55:43.587705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.587731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.426 [2024-11-26 20:55:43.587852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.587878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.426 [2024-11-26 20:55:43.587987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.588012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.426 [2024-11-26 20:55:43.588144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.588184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.426 [2024-11-26 20:55:43.588315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.588345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.426 [2024-11-26 20:55:43.588456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.588483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.426 [2024-11-26 20:55:43.588580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.588619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.426 [2024-11-26 20:55:43.588757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.588783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 
00:25:40.426 [2024-11-26 20:55:43.588928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.426 [2024-11-26 20:55:43.588954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.426 qpair failed and we were unable to recover it. 00:25:40.427 [2024-11-26 20:55:43.589039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.427 [2024-11-26 20:55:43.589066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.427 qpair failed and we were unable to recover it. 00:25:40.427 [2024-11-26 20:55:43.589207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.427 [2024-11-26 20:55:43.589233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.427 qpair failed and we were unable to recover it. 00:25:40.427 [2024-11-26 20:55:43.589349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.427 [2024-11-26 20:55:43.589376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.427 qpair failed and we were unable to recover it. 00:25:40.427 [2024-11-26 20:55:43.589468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.427 [2024-11-26 20:55:43.589493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.427 qpair failed and we were unable to recover it. 00:25:40.427 [2024-11-26 20:55:43.589602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.427 [2024-11-26 20:55:43.589628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.427 qpair failed and we were unable to recover it. 00:25:40.427 [2024-11-26 20:55:43.589738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.427 [2024-11-26 20:55:43.589769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.427 qpair failed and we were unable to recover it. 00:25:40.427 [2024-11-26 20:55:43.589886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.427 [2024-11-26 20:55:43.589914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.427 qpair failed and we were unable to recover it. 00:25:40.427 [2024-11-26 20:55:43.590037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.427 [2024-11-26 20:55:43.590076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.427 qpair failed and we were unable to recover it. 00:25:40.427 [2024-11-26 20:55:43.590176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.427 [2024-11-26 20:55:43.590214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.427 qpair failed and we were unable to recover it. 
00:25:40.427 [2024-11-26 20:55:43.590337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.427 [2024-11-26 20:55:43.590364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.427 qpair failed and we were unable to recover it. 00:25:40.427 [2024-11-26 20:55:43.590482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.427 [2024-11-26 20:55:43.590507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.427 qpair failed and we were unable to recover it. 00:25:40.427 [2024-11-26 20:55:43.590620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.427 [2024-11-26 20:55:43.590646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.427 qpair failed and we were unable to recover it. 00:25:40.427 [2024-11-26 20:55:43.590729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.427 [2024-11-26 20:55:43.590755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.427 qpair failed and we were unable to recover it. 00:25:40.427 [2024-11-26 20:55:43.590867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.427 [2024-11-26 20:55:43.590893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.427 qpair failed and we were unable to recover it. 00:25:40.427 [2024-11-26 20:55:43.590976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.427 [2024-11-26 20:55:43.591006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.427 qpair failed and we were unable to recover it. 00:25:40.427 [2024-11-26 20:55:43.591127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.427 [2024-11-26 20:55:43.591155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.427 qpair failed and we were unable to recover it. 00:25:40.427 [2024-11-26 20:55:43.591243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.427 [2024-11-26 20:55:43.591271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.427 qpair failed and we were unable to recover it. 00:25:40.427 [2024-11-26 20:55:43.591408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.427 [2024-11-26 20:55:43.591435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.427 qpair failed and we were unable to recover it. 00:25:40.427 [2024-11-26 20:55:43.591573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.427 [2024-11-26 20:55:43.591604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.427 qpair failed and we were unable to recover it. 
00:25:40.427 [2024-11-26 20:55:43.591754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.427 [2024-11-26 20:55:43.591781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.427 qpair failed and we were unable to recover it. 00:25:40.427 [2024-11-26 20:55:43.591879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.427 [2024-11-26 20:55:43.591947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.427 qpair failed and we were unable to recover it. 00:25:40.427 [2024-11-26 20:55:43.592082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.427 [2024-11-26 20:55:43.592109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.427 qpair failed and we were unable to recover it. 00:25:40.427 [2024-11-26 20:55:43.592212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.427 [2024-11-26 20:55:43.592239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.427 qpair failed and we were unable to recover it. 00:25:40.427 [2024-11-26 20:55:43.592367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.427 [2024-11-26 20:55:43.592394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.427 qpair failed and we were unable to recover it. 00:25:40.427 [2024-11-26 20:55:43.592509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.427 [2024-11-26 20:55:43.592536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.427 qpair failed and we were unable to recover it. 00:25:40.427 [2024-11-26 20:55:43.592653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.427 [2024-11-26 20:55:43.592679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.427 qpair failed and we were unable to recover it. 00:25:40.427 [2024-11-26 20:55:43.592816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.428 [2024-11-26 20:55:43.592845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.428 qpair failed and we were unable to recover it. 00:25:40.428 [2024-11-26 20:55:43.592925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.428 [2024-11-26 20:55:43.592951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.428 qpair failed and we were unable to recover it. 00:25:40.428 [2024-11-26 20:55:43.593058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.428 [2024-11-26 20:55:43.593085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.428 qpair failed and we were unable to recover it. 
00:25:40.428 [2024-11-26 20:55:43.593208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.428 [2024-11-26 20:55:43.593248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.428 qpair failed and we were unable to recover it. 00:25:40.428 [2024-11-26 20:55:43.593431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.428 [2024-11-26 20:55:43.593461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.428 qpair failed and we were unable to recover it. 00:25:40.428 [2024-11-26 20:55:43.593575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.428 [2024-11-26 20:55:43.593611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.428 qpair failed and we were unable to recover it. 00:25:40.428 [2024-11-26 20:55:43.593727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.428 [2024-11-26 20:55:43.593758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.428 qpair failed and we were unable to recover it. 00:25:40.428 [2024-11-26 20:55:43.593907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.428 [2024-11-26 20:55:43.593933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.428 qpair failed and we were unable to recover it. 00:25:40.428 [2024-11-26 20:55:43.594024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.428 [2024-11-26 20:55:43.594050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.428 qpair failed and we were unable to recover it. 00:25:40.428 [2024-11-26 20:55:43.594141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.428 [2024-11-26 20:55:43.594167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.428 qpair failed and we were unable to recover it. 00:25:40.428 [2024-11-26 20:55:43.594275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.428 [2024-11-26 20:55:43.594314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.428 qpair failed and we were unable to recover it. 00:25:40.428 [2024-11-26 20:55:43.594403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.428 [2024-11-26 20:55:43.594429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.428 qpair failed and we were unable to recover it. 00:25:40.428 [2024-11-26 20:55:43.594542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.428 [2024-11-26 20:55:43.594567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.428 qpair failed and we were unable to recover it. 
00:25:40.428 [2024-11-26 20:55:43.594694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.428 [2024-11-26 20:55:43.594721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.428 qpair failed and we were unable to recover it. 00:25:40.428 [2024-11-26 20:55:43.594869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.428 [2024-11-26 20:55:43.594895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.428 qpair failed and we were unable to recover it. 00:25:40.428 [2024-11-26 20:55:43.594974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.428 [2024-11-26 20:55:43.595000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.428 qpair failed and we were unable to recover it. 00:25:40.428 [2024-11-26 20:55:43.595119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.428 [2024-11-26 20:55:43.595158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.428 qpair failed and we were unable to recover it. 00:25:40.428 [2024-11-26 20:55:43.595290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.428 [2024-11-26 20:55:43.595337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.428 qpair failed and we were unable to recover it. 00:25:40.428 [2024-11-26 20:55:43.595432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.428 [2024-11-26 20:55:43.595462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.428 qpair failed and we were unable to recover it. 00:25:40.428 [2024-11-26 20:55:43.595583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.428 [2024-11-26 20:55:43.595611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.428 qpair failed and we were unable to recover it. 00:25:40.428 [2024-11-26 20:55:43.595796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.428 [2024-11-26 20:55:43.595859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.428 qpair failed and we were unable to recover it. 00:25:40.428 [2024-11-26 20:55:43.596002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.428 [2024-11-26 20:55:43.596057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.428 qpair failed and we were unable to recover it. 00:25:40.428 [2024-11-26 20:55:43.596143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.428 [2024-11-26 20:55:43.596176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.428 qpair failed and we were unable to recover it. 
00:25:40.428 [2024-11-26 20:55:43.596287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.428 [2024-11-26 20:55:43.596321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.428 qpair failed and we were unable to recover it. 00:25:40.428 [2024-11-26 20:55:43.596441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.428 [2024-11-26 20:55:43.596468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.428 qpair failed and we were unable to recover it. 00:25:40.428 [2024-11-26 20:55:43.596580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.428 [2024-11-26 20:55:43.596606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.428 qpair failed and we were unable to recover it. 00:25:40.428 [2024-11-26 20:55:43.596726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.428 [2024-11-26 20:55:43.596753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.428 qpair failed and we were unable to recover it. 00:25:40.429 [2024-11-26 20:55:43.596841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.429 [2024-11-26 20:55:43.596868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.429 qpair failed and we were unable to recover it. 00:25:40.429 [2024-11-26 20:55:43.596972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.429 [2024-11-26 20:55:43.596998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.429 qpair failed and we were unable to recover it. 00:25:40.429 [2024-11-26 20:55:43.597083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.429 [2024-11-26 20:55:43.597110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.429 qpair failed and we were unable to recover it. 00:25:40.429 [2024-11-26 20:55:43.597205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.429 [2024-11-26 20:55:43.597232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.429 qpair failed and we were unable to recover it. 00:25:40.429 [2024-11-26 20:55:43.597322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.429 [2024-11-26 20:55:43.597350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.429 qpair failed and we were unable to recover it. 00:25:40.429 [2024-11-26 20:55:43.597468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.429 [2024-11-26 20:55:43.597495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.429 qpair failed and we were unable to recover it. 
00:25:40.429 [2024-11-26 20:55:43.597585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.429 [2024-11-26 20:55:43.597625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.429 qpair failed and we were unable to recover it. 00:25:40.429 [2024-11-26 20:55:43.597747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.429 [2024-11-26 20:55:43.597775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.429 qpair failed and we were unable to recover it. 00:25:40.429 [2024-11-26 20:55:43.597902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.429 [2024-11-26 20:55:43.597941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.429 qpair failed and we were unable to recover it. 00:25:40.429 [2024-11-26 20:55:43.598039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.429 [2024-11-26 20:55:43.598067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.429 qpair failed and we were unable to recover it. 00:25:40.429 [2024-11-26 20:55:43.598185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.429 [2024-11-26 20:55:43.598214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.429 qpair failed and we were unable to recover it. 00:25:40.429 [2024-11-26 20:55:43.598312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.429 [2024-11-26 20:55:43.598340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.429 qpair failed and we were unable to recover it. 00:25:40.429 [2024-11-26 20:55:43.598439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.429 [2024-11-26 20:55:43.598466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.429 qpair failed and we were unable to recover it. 00:25:40.429 [2024-11-26 20:55:43.598554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.429 [2024-11-26 20:55:43.598581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.429 qpair failed and we were unable to recover it. 00:25:40.429 [2024-11-26 20:55:43.598667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.429 [2024-11-26 20:55:43.598693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.429 qpair failed and we were unable to recover it. 00:25:40.429 [2024-11-26 20:55:43.598827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.429 [2024-11-26 20:55:43.598852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.429 qpair failed and we were unable to recover it. 
00:25:40.429 [2024-11-26 20:55:43.598962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.429 [2024-11-26 20:55:43.598988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.429 qpair failed and we were unable to recover it. 00:25:40.429 [2024-11-26 20:55:43.599075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.430 [2024-11-26 20:55:43.599100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.430 qpair failed and we were unable to recover it. 00:25:40.430 [2024-11-26 20:55:43.599204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.430 [2024-11-26 20:55:43.599230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.430 qpair failed and we were unable to recover it. 00:25:40.430 [2024-11-26 20:55:43.599346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.430 [2024-11-26 20:55:43.599372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.430 qpair failed and we were unable to recover it. 00:25:40.430 [2024-11-26 20:55:43.599479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.430 [2024-11-26 20:55:43.599519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.430 qpair failed and we were unable to recover it. 00:25:40.430 [2024-11-26 20:55:43.599642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.430 [2024-11-26 20:55:43.599670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.430 qpair failed and we were unable to recover it. 00:25:40.430 [2024-11-26 20:55:43.599815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.430 [2024-11-26 20:55:43.599842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.430 qpair failed and we were unable to recover it. 00:25:40.430 [2024-11-26 20:55:43.599927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.430 [2024-11-26 20:55:43.599954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.430 qpair failed and we were unable to recover it. 00:25:40.430 [2024-11-26 20:55:43.600066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.430 [2024-11-26 20:55:43.600092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.430 qpair failed and we were unable to recover it. 00:25:40.430 [2024-11-26 20:55:43.600229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.430 [2024-11-26 20:55:43.600255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.430 qpair failed and we were unable to recover it. 
00:25:40.430 [2024-11-26 20:55:43.600370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.430 [2024-11-26 20:55:43.600398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.430 qpair failed and we were unable to recover it. 00:25:40.430 [2024-11-26 20:55:43.600510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.430 [2024-11-26 20:55:43.600537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.430 qpair failed and we were unable to recover it. 00:25:40.430 [2024-11-26 20:55:43.600635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.430 [2024-11-26 20:55:43.600662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.430 qpair failed and we were unable to recover it. 00:25:40.430 [2024-11-26 20:55:43.600775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.430 [2024-11-26 20:55:43.600802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.430 qpair failed and we were unable to recover it. 00:25:40.430 [2024-11-26 20:55:43.600880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.430 [2024-11-26 20:55:43.600906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.430 qpair failed and we were unable to recover it. 00:25:40.430 [2024-11-26 20:55:43.601036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.430 [2024-11-26 20:55:43.601083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.430 qpair failed and we were unable to recover it. 00:25:40.430 [2024-11-26 20:55:43.601198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.430 [2024-11-26 20:55:43.601226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.430 qpair failed and we were unable to recover it. 00:25:40.430 [2024-11-26 20:55:43.601351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.431 [2024-11-26 20:55:43.601378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.431 qpair failed and we were unable to recover it. 00:25:40.431 [2024-11-26 20:55:43.601496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.431 [2024-11-26 20:55:43.601523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.431 qpair failed and we were unable to recover it. 00:25:40.431 [2024-11-26 20:55:43.601657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.431 [2024-11-26 20:55:43.601712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.431 qpair failed and we were unable to recover it. 
00:25:40.431 [2024-11-26 20:55:43.601804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.431 [2024-11-26 20:55:43.601830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.431 qpair failed and we were unable to recover it. 00:25:40.431 [2024-11-26 20:55:43.601910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.431 [2024-11-26 20:55:43.601936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.431 qpair failed and we were unable to recover it. 00:25:40.431 [2024-11-26 20:55:43.602050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.431 [2024-11-26 20:55:43.602078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.431 qpair failed and we were unable to recover it. 00:25:40.431 [2024-11-26 20:55:43.602198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.431 [2024-11-26 20:55:43.602225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.431 qpair failed and we were unable to recover it. 00:25:40.431 [2024-11-26 20:55:43.602351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.431 [2024-11-26 20:55:43.602379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.431 qpair failed and we were unable to recover it. 00:25:40.431 [2024-11-26 20:55:43.602471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.431 [2024-11-26 20:55:43.602497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.431 qpair failed and we were unable to recover it. 00:25:40.431 [2024-11-26 20:55:43.602610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.431 [2024-11-26 20:55:43.602636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.431 qpair failed and we were unable to recover it. 00:25:40.431 [2024-11-26 20:55:43.602752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.431 [2024-11-26 20:55:43.602779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.431 qpair failed and we were unable to recover it. 00:25:40.431 [2024-11-26 20:55:43.602861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.431 [2024-11-26 20:55:43.602888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.431 qpair failed and we were unable to recover it. 00:25:40.431 [2024-11-26 20:55:43.602994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.431 [2024-11-26 20:55:43.603021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.431 qpair failed and we were unable to recover it. 
00:25:40.431 [2024-11-26 20:55:43.603123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.431 [2024-11-26 20:55:43.603168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.431 qpair failed and we were unable to recover it. 00:25:40.431 [2024-11-26 20:55:43.603269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.431 [2024-11-26 20:55:43.603316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.431 qpair failed and we were unable to recover it. 00:25:40.431 [2024-11-26 20:55:43.603427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.431 [2024-11-26 20:55:43.603453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.431 qpair failed and we were unable to recover it. 00:25:40.431 [2024-11-26 20:55:43.603562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.431 [2024-11-26 20:55:43.603599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.431 qpair failed and we were unable to recover it. 00:25:40.431 [2024-11-26 20:55:43.603713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.431 [2024-11-26 20:55:43.603738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.431 qpair failed and we were unable to recover it. 00:25:40.431 [2024-11-26 20:55:43.603847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.431 [2024-11-26 20:55:43.603873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.431 qpair failed and we were unable to recover it. 00:25:40.431 [2024-11-26 20:55:43.604021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.431 [2024-11-26 20:55:43.604049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.431 qpair failed and we were unable to recover it. 00:25:40.431 [2024-11-26 20:55:43.604137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.431 [2024-11-26 20:55:43.604165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.431 qpair failed and we were unable to recover it. 00:25:40.431 [2024-11-26 20:55:43.604277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.431 [2024-11-26 20:55:43.604319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.431 qpair failed and we were unable to recover it. 00:25:40.431 [2024-11-26 20:55:43.604406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.431 [2024-11-26 20:55:43.604432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.431 qpair failed and we were unable to recover it. 
00:25:40.431 [2024-11-26 20:55:43.604571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.431 [2024-11-26 20:55:43.604600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.431 qpair failed and we were unable to recover it. 00:25:40.431 [2024-11-26 20:55:43.604719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.431 [2024-11-26 20:55:43.604745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.431 qpair failed and we were unable to recover it. 00:25:40.431 [2024-11-26 20:55:43.604837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.431 [2024-11-26 20:55:43.604864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.432 qpair failed and we were unable to recover it. 00:25:40.432 [2024-11-26 20:55:43.604983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.432 [2024-11-26 20:55:43.605012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.432 qpair failed and we were unable to recover it. 00:25:40.432 [2024-11-26 20:55:43.605159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.432 [2024-11-26 20:55:43.605186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.432 qpair failed and we were unable to recover it. 00:25:40.432 [2024-11-26 20:55:43.605266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.432 [2024-11-26 20:55:43.605310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.432 qpair failed and we were unable to recover it. 00:25:40.432 [2024-11-26 20:55:43.605401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.432 [2024-11-26 20:55:43.605427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.432 qpair failed and we were unable to recover it. 00:25:40.432 [2024-11-26 20:55:43.605544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.432 [2024-11-26 20:55:43.605570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.432 qpair failed and we were unable to recover it. 00:25:40.432 [2024-11-26 20:55:43.605699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.432 [2024-11-26 20:55:43.605725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.432 qpair failed and we were unable to recover it. 00:25:40.432 [2024-11-26 20:55:43.605817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.432 [2024-11-26 20:55:43.605845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.432 qpair failed and we were unable to recover it. 
00:25:40.432 [2024-11-26 20:55:43.605962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.432 [2024-11-26 20:55:43.605989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.432 qpair failed and we were unable to recover it. 00:25:40.432 [2024-11-26 20:55:43.606105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.432 [2024-11-26 20:55:43.606131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.432 qpair failed and we were unable to recover it. 00:25:40.432 [2024-11-26 20:55:43.606242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.432 [2024-11-26 20:55:43.606268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.433 qpair failed and we were unable to recover it. 00:25:40.433 [2024-11-26 20:55:43.606384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.433 [2024-11-26 20:55:43.606412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.433 qpair failed and we were unable to recover it. 00:25:40.433 [2024-11-26 20:55:43.606500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.433 [2024-11-26 20:55:43.606528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.433 qpair failed and we were unable to recover it. 00:25:40.433 [2024-11-26 20:55:43.606653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.433 [2024-11-26 20:55:43.606681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.433 qpair failed and we were unable to recover it. 00:25:40.433 [2024-11-26 20:55:43.606777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.433 [2024-11-26 20:55:43.606804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.433 qpair failed and we were unable to recover it. 00:25:40.433 [2024-11-26 20:55:43.606918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.433 [2024-11-26 20:55:43.606945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.433 qpair failed and we were unable to recover it. 00:25:40.433 [2024-11-26 20:55:43.607084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.433 [2024-11-26 20:55:43.607111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.433 qpair failed and we were unable to recover it. 00:25:40.433 [2024-11-26 20:55:43.607249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.433 [2024-11-26 20:55:43.607276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.433 qpair failed and we were unable to recover it. 
00:25:40.433 [2024-11-26 20:55:43.607431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.433 [2024-11-26 20:55:43.607471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.433 qpair failed and we were unable to recover it. 00:25:40.433 [2024-11-26 20:55:43.607568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.433 [2024-11-26 20:55:43.607602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.433 qpair failed and we were unable to recover it. 00:25:40.433 [2024-11-26 20:55:43.607714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.433 [2024-11-26 20:55:43.607740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.433 qpair failed and we were unable to recover it. 00:25:40.433 [2024-11-26 20:55:43.607823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.434 [2024-11-26 20:55:43.607849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.434 qpair failed and we were unable to recover it. 00:25:40.434 [2024-11-26 20:55:43.608000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.434 [2024-11-26 20:55:43.608026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.434 qpair failed and we were unable to recover it. 00:25:40.434 [2024-11-26 20:55:43.608105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.434 [2024-11-26 20:55:43.608131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.434 qpair failed and we were unable to recover it. 00:25:40.434 [2024-11-26 20:55:43.608244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.434 [2024-11-26 20:55:43.608273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.434 qpair failed and we were unable to recover it. 00:25:40.434 [2024-11-26 20:55:43.608412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.434 [2024-11-26 20:55:43.608451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.434 qpair failed and we were unable to recover it. 00:25:40.434 [2024-11-26 20:55:43.608577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.434 [2024-11-26 20:55:43.608608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.434 qpair failed and we were unable to recover it. 00:25:40.434 [2024-11-26 20:55:43.608723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.434 [2024-11-26 20:55:43.608750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.434 qpair failed and we were unable to recover it. 
00:25:40.434 [2024-11-26 20:55:43.608912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.434 [2024-11-26 20:55:43.608945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.434 qpair failed and we were unable to recover it. 00:25:40.434 [2024-11-26 20:55:43.609107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.434 [2024-11-26 20:55:43.609162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.434 qpair failed and we were unable to recover it. 00:25:40.434 [2024-11-26 20:55:43.609276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.434 [2024-11-26 20:55:43.609316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.434 qpair failed and we were unable to recover it. 00:25:40.434 [2024-11-26 20:55:43.609435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.435 [2024-11-26 20:55:43.609460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.435 qpair failed and we were unable to recover it. 00:25:40.435 [2024-11-26 20:55:43.609579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.435 [2024-11-26 20:55:43.609616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.435 qpair failed and we were unable to recover it. 00:25:40.435 [2024-11-26 20:55:43.609703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.435 [2024-11-26 20:55:43.609729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.435 qpair failed and we were unable to recover it. 00:25:40.435 [2024-11-26 20:55:43.609820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.435 [2024-11-26 20:55:43.609846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.435 qpair failed and we were unable to recover it. 00:25:40.435 [2024-11-26 20:55:43.609968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.435 [2024-11-26 20:55:43.609995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.435 qpair failed and we were unable to recover it. 00:25:40.435 [2024-11-26 20:55:43.610089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.435 [2024-11-26 20:55:43.610116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.435 qpair failed and we were unable to recover it. 00:25:40.435 [2024-11-26 20:55:43.610226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.435 [2024-11-26 20:55:43.610267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.435 qpair failed and we were unable to recover it. 
00:25:40.435 [2024-11-26 20:55:43.610445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.435 [2024-11-26 20:55:43.610474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.435 qpair failed and we were unable to recover it. 00:25:40.435 [2024-11-26 20:55:43.610592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.435 [2024-11-26 20:55:43.610620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.436 qpair failed and we were unable to recover it. 00:25:40.436 [2024-11-26 20:55:43.610722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.436 [2024-11-26 20:55:43.610748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.436 qpair failed and we were unable to recover it. 00:25:40.436 [2024-11-26 20:55:43.610854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.436 [2024-11-26 20:55:43.610880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.436 qpair failed and we were unable to recover it. 00:25:40.436 [2024-11-26 20:55:43.611024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.436 [2024-11-26 20:55:43.611050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.436 qpair failed and we were unable to recover it. 00:25:40.436 [2024-11-26 20:55:43.611197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.436 [2024-11-26 20:55:43.611225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.436 qpair failed and we were unable to recover it. 00:25:40.436 [2024-11-26 20:55:43.611349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.436 [2024-11-26 20:55:43.611390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.436 qpair failed and we were unable to recover it. 00:25:40.436 [2024-11-26 20:55:43.611517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.436 [2024-11-26 20:55:43.611547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.436 qpair failed and we were unable to recover it. 00:25:40.436 [2024-11-26 20:55:43.611661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.436 [2024-11-26 20:55:43.611687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.436 qpair failed and we were unable to recover it. 00:25:40.436 [2024-11-26 20:55:43.611773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.436 [2024-11-26 20:55:43.611800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.436 qpair failed and we were unable to recover it. 
00:25:40.436 [2024-11-26 20:55:43.611890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.437 [2024-11-26 20:55:43.611929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.437 qpair failed and we were unable to recover it. 00:25:40.437 [2024-11-26 20:55:43.612023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.437 [2024-11-26 20:55:43.612050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.437 qpair failed and we were unable to recover it. 00:25:40.437 [2024-11-26 20:55:43.612168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.437 [2024-11-26 20:55:43.612195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.437 qpair failed and we were unable to recover it. 00:25:40.437 [2024-11-26 20:55:43.612286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.437 [2024-11-26 20:55:43.612321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.437 qpair failed and we were unable to recover it. 00:25:40.437 [2024-11-26 20:55:43.612450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.437 [2024-11-26 20:55:43.612477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.437 qpair failed and we were unable to recover it. 00:25:40.437 [2024-11-26 20:55:43.612566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.437 [2024-11-26 20:55:43.612593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.437 qpair failed and we were unable to recover it. 00:25:40.437 [2024-11-26 20:55:43.612673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.437 [2024-11-26 20:55:43.612699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.437 qpair failed and we were unable to recover it. 00:25:40.437 [2024-11-26 20:55:43.612819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.437 [2024-11-26 20:55:43.612851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.437 qpair failed and we were unable to recover it. 00:25:40.437 [2024-11-26 20:55:43.612967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.437 [2024-11-26 20:55:43.612995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.437 qpair failed and we were unable to recover it. 00:25:40.437 [2024-11-26 20:55:43.613071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.437 [2024-11-26 20:55:43.613098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.437 qpair failed and we were unable to recover it. 
00:25:40.437 [2024-11-26 20:55:43.613221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.437 [2024-11-26 20:55:43.613248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.437 qpair failed and we were unable to recover it. 00:25:40.437 [2024-11-26 20:55:43.613389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.437 [2024-11-26 20:55:43.613417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.437 qpair failed and we were unable to recover it. 00:25:40.437 [2024-11-26 20:55:43.614065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.437 [2024-11-26 20:55:43.614097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.437 qpair failed and we were unable to recover it. 00:25:40.438 [2024-11-26 20:55:43.614201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.438 [2024-11-26 20:55:43.614229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.438 qpair failed and we were unable to recover it. 00:25:40.438 [2024-11-26 20:55:43.614356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.438 [2024-11-26 20:55:43.614385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.438 qpair failed and we were unable to recover it. 00:25:40.438 [2024-11-26 20:55:43.614472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.438 [2024-11-26 20:55:43.614499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.438 qpair failed and we were unable to recover it. 00:25:40.438 [2024-11-26 20:55:43.614623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.438 [2024-11-26 20:55:43.614650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.438 qpair failed and we were unable to recover it. 00:25:40.438 [2024-11-26 20:55:43.614770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.438 [2024-11-26 20:55:43.614796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.438 qpair failed and we were unable to recover it. 00:25:40.438 [2024-11-26 20:55:43.614885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.438 [2024-11-26 20:55:43.614913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.438 qpair failed and we were unable to recover it. 00:25:40.438 [2024-11-26 20:55:43.614992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.438 [2024-11-26 20:55:43.615019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.438 qpair failed and we were unable to recover it. 
00:25:40.438 [2024-11-26 20:55:43.615114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.438 [2024-11-26 20:55:43.615141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.438 qpair failed and we were unable to recover it. 00:25:40.438 [2024-11-26 20:55:43.615263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.438 [2024-11-26 20:55:43.615307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.438 qpair failed and we were unable to recover it. 00:25:40.438 [2024-11-26 20:55:43.615395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.438 [2024-11-26 20:55:43.615422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.439 qpair failed and we were unable to recover it. 00:25:40.439 [2024-11-26 20:55:43.615499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.439 [2024-11-26 20:55:43.615527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.439 qpair failed and we were unable to recover it. 00:25:40.439 [2024-11-26 20:55:43.615625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.439 [2024-11-26 20:55:43.615651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.439 qpair failed and we were unable to recover it. 00:25:40.439 [2024-11-26 20:55:43.615791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.439 [2024-11-26 20:55:43.615817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.439 qpair failed and we were unable to recover it. 00:25:40.439 [2024-11-26 20:55:43.615926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.439 [2024-11-26 20:55:43.615952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.439 qpair failed and we were unable to recover it. 00:25:40.439 [2024-11-26 20:55:43.616077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.439 [2024-11-26 20:55:43.616119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.439 qpair failed and we were unable to recover it. 00:25:40.439 [2024-11-26 20:55:43.616252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.439 [2024-11-26 20:55:43.616292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.439 qpair failed and we were unable to recover it. 00:25:40.440 [2024-11-26 20:55:43.616425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.440 [2024-11-26 20:55:43.616453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.440 qpair failed and we were unable to recover it. 
00:25:40.440 [2024-11-26 20:55:43.616539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.440 [2024-11-26 20:55:43.616565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.440 qpair failed and we were unable to recover it. 00:25:40.440 [2024-11-26 20:55:43.616705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.440 [2024-11-26 20:55:43.616757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.440 qpair failed and we were unable to recover it. 00:25:40.440 [2024-11-26 20:55:43.616900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.440 [2024-11-26 20:55:43.616951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.440 qpair failed and we were unable to recover it. 00:25:40.440 [2024-11-26 20:55:43.617091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.440 [2024-11-26 20:55:43.617142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.440 qpair failed and we were unable to recover it. 00:25:40.440 [2024-11-26 20:55:43.617345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.440 [2024-11-26 20:55:43.617372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.440 qpair failed and we were unable to recover it. 00:25:40.440 [2024-11-26 20:55:43.617510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.440 [2024-11-26 20:55:43.617536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.440 qpair failed and we were unable to recover it. 00:25:40.440 [2024-11-26 20:55:43.617625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.440 [2024-11-26 20:55:43.617651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.441 qpair failed and we were unable to recover it. 00:25:40.441 [2024-11-26 20:55:43.617788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.441 [2024-11-26 20:55:43.617838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.441 qpair failed and we were unable to recover it. 00:25:40.441 [2024-11-26 20:55:43.617983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.441 [2024-11-26 20:55:43.618039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.441 qpair failed and we were unable to recover it. 00:25:40.441 [2024-11-26 20:55:43.618181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.441 [2024-11-26 20:55:43.618211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.441 qpair failed and we were unable to recover it. 
00:25:40.441 [2024-11-26 20:55:43.618296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.441 [2024-11-26 20:55:43.618328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.441 qpair failed and we were unable to recover it. 00:25:40.441 [2024-11-26 20:55:43.618415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.441 [2024-11-26 20:55:43.618441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.441 qpair failed and we were unable to recover it. 00:25:40.441 [2024-11-26 20:55:43.618556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.441 [2024-11-26 20:55:43.618582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.441 qpair failed and we were unable to recover it. 00:25:40.441 [2024-11-26 20:55:43.618701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.441 [2024-11-26 20:55:43.618727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.441 qpair failed and we were unable to recover it. 00:25:40.441 [2024-11-26 20:55:43.618816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.441 [2024-11-26 20:55:43.618842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.442 qpair failed and we were unable to recover it. 00:25:40.442 [2024-11-26 20:55:43.618995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.442 [2024-11-26 20:55:43.619044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.442 qpair failed and we were unable to recover it. 00:25:40.442 [2024-11-26 20:55:43.619132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.442 [2024-11-26 20:55:43.619160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.442 qpair failed and we were unable to recover it. 00:25:40.442 [2024-11-26 20:55:43.619274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.442 [2024-11-26 20:55:43.619316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.442 qpair failed and we were unable to recover it. 00:25:40.442 [2024-11-26 20:55:43.619412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.442 [2024-11-26 20:55:43.619439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.442 qpair failed and we were unable to recover it. 00:25:40.443 [2024-11-26 20:55:43.619556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.443 [2024-11-26 20:55:43.619583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.443 qpair failed and we were unable to recover it. 
00:25:40.443 [2024-11-26 20:55:43.619692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.443 [2024-11-26 20:55:43.619718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.443 qpair failed and we were unable to recover it. 00:25:40.443 [2024-11-26 20:55:43.619798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.443 [2024-11-26 20:55:43.619824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.443 qpair failed and we were unable to recover it. 00:25:40.443 [2024-11-26 20:55:43.619942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.443 [2024-11-26 20:55:43.619969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.443 qpair failed and we were unable to recover it. 00:25:40.443 [2024-11-26 20:55:43.620063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.443 [2024-11-26 20:55:43.620089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.443 qpair failed and we were unable to recover it. 00:25:40.443 [2024-11-26 20:55:43.620172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.443 [2024-11-26 20:55:43.620201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.443 qpair failed and we were unable to recover it. 00:25:40.443 [2024-11-26 20:55:43.620315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.443 [2024-11-26 20:55:43.620342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.443 qpair failed and we were unable to recover it. 00:25:40.443 [2024-11-26 20:55:43.620452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.443 [2024-11-26 20:55:43.620479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.443 qpair failed and we were unable to recover it. 00:25:40.443 [2024-11-26 20:55:43.620566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.444 [2024-11-26 20:55:43.620601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.444 qpair failed and we were unable to recover it. 00:25:40.444 [2024-11-26 20:55:43.620708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.444 [2024-11-26 20:55:43.620734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.444 qpair failed and we were unable to recover it. 00:25:40.444 [2024-11-26 20:55:43.620858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.444 [2024-11-26 20:55:43.620884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.444 qpair failed and we were unable to recover it. 
00:25:40.444 [2024-11-26 20:55:43.620998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.444 [2024-11-26 20:55:43.621026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.444 qpair failed and we were unable to recover it. 00:25:40.444 [2024-11-26 20:55:43.621118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.444 [2024-11-26 20:55:43.621145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.444 qpair failed and we were unable to recover it. 00:25:40.444 [2024-11-26 20:55:43.621259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.444 [2024-11-26 20:55:43.621285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.444 qpair failed and we were unable to recover it. 00:25:40.444 [2024-11-26 20:55:43.621385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.444 [2024-11-26 20:55:43.621412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.444 qpair failed and we were unable to recover it. 00:25:40.444 [2024-11-26 20:55:43.621498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.444 [2024-11-26 20:55:43.621525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.444 qpair failed and we were unable to recover it. 00:25:40.444 [2024-11-26 20:55:43.621617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.444 [2024-11-26 20:55:43.621643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.444 qpair failed and we were unable to recover it. 00:25:40.444 [2024-11-26 20:55:43.621733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.444 [2024-11-26 20:55:43.621760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.444 qpair failed and we were unable to recover it. 00:25:40.444 [2024-11-26 20:55:43.621866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.444 [2024-11-26 20:55:43.621892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.444 qpair failed and we were unable to recover it. 00:25:40.444 [2024-11-26 20:55:43.621971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.444 [2024-11-26 20:55:43.621997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.444 qpair failed and we were unable to recover it. 00:25:40.444 [2024-11-26 20:55:43.622077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.444 [2024-11-26 20:55:43.622104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.444 qpair failed and we were unable to recover it. 
00:25:40.444 [2024-11-26 20:55:43.622201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.444 [2024-11-26 20:55:43.622227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.444 qpair failed and we were unable to recover it. 00:25:40.444 [2024-11-26 20:55:43.622314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.444 [2024-11-26 20:55:43.622341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.444 qpair failed and we were unable to recover it. 00:25:40.444 [2024-11-26 20:55:43.622449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.444 [2024-11-26 20:55:43.622475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.444 qpair failed and we were unable to recover it. 00:25:40.444 [2024-11-26 20:55:43.622591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.444 [2024-11-26 20:55:43.622618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.444 qpair failed and we were unable to recover it. 00:25:40.444 [2024-11-26 20:55:43.622709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.444 [2024-11-26 20:55:43.622736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.444 qpair failed and we were unable to recover it. 00:25:40.444 [2024-11-26 20:55:43.622849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.444 [2024-11-26 20:55:43.622875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.444 qpair failed and we were unable to recover it. 00:25:40.444 [2024-11-26 20:55:43.622965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.444 [2024-11-26 20:55:43.622991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.444 qpair failed and we were unable to recover it. 00:25:40.444 [2024-11-26 20:55:43.623098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.444 [2024-11-26 20:55:43.623124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.444 qpair failed and we were unable to recover it. 00:25:40.444 [2024-11-26 20:55:43.623214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.444 [2024-11-26 20:55:43.623240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.444 qpair failed and we were unable to recover it. 00:25:40.444 [2024-11-26 20:55:43.623327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.444 [2024-11-26 20:55:43.623354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.444 qpair failed and we were unable to recover it. 
00:25:40.444 [2024-11-26 20:55:43.623493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.444 [2024-11-26 20:55:43.623519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.444 qpair failed and we were unable to recover it. 00:25:40.444 [2024-11-26 20:55:43.623608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.444 [2024-11-26 20:55:43.623635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.444 qpair failed and we were unable to recover it. 00:25:40.444 [2024-11-26 20:55:43.623721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.444 [2024-11-26 20:55:43.623748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.444 qpair failed and we were unable to recover it. 00:25:40.444 [2024-11-26 20:55:43.623834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.444 [2024-11-26 20:55:43.623860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.444 qpair failed and we were unable to recover it. 00:25:40.444 [2024-11-26 20:55:43.623975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.444 [2024-11-26 20:55:43.624001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.444 qpair failed and we were unable to recover it. 00:25:40.444 [2024-11-26 20:55:43.624124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.444 [2024-11-26 20:55:43.624163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.444 qpair failed and we were unable to recover it. 00:25:40.444 [2024-11-26 20:55:43.624315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.444 [2024-11-26 20:55:43.624344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.444 qpair failed and we were unable to recover it. 00:25:40.444 [2024-11-26 20:55:43.624435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.444 [2024-11-26 20:55:43.624469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.624555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.624581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.624724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.624750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 
00:25:40.445 [2024-11-26 20:55:43.624829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.624855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.624942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.624970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.625088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.625114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.625221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.625248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.625345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.625372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.625452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.625479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.625573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.625610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.625751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.625777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.625885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.625911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.626006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.626032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 
00:25:40.445 [2024-11-26 20:55:43.626111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.626138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.626237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.626276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.626390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.626418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.626557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.626584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.626699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.626725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.626856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.626903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.627046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.627072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.627188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.627214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.627318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.627345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.627484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.627510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 
00:25:40.445 [2024-11-26 20:55:43.627597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.627622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.627706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.627733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.627850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.627876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.627965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.627994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.628082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.628114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.628230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.628257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.628382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.628410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.628529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.628555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.628650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.628678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.628775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.628802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 
00:25:40.445 [2024-11-26 20:55:43.628913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.628939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.629032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.629059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.629200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.629227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.629349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.629376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.629488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.629514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.629633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.629659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.629746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.629771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.629880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.629927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.630008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.630034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.630142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.630167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 
00:25:40.445 [2024-11-26 20:55:43.630274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.630315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.630457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.630483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.630578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.630630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.630753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.630781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.630864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.630893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.630990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.631017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.631129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.631155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.631243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.631270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.631401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.631428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.631543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.631569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 
00:25:40.445 [2024-11-26 20:55:43.631687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.631713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.631810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.631837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.631952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.631980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.632091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.632131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.632230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.632258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.632392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.632419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.632533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.445 [2024-11-26 20:55:43.632559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.445 qpair failed and we were unable to recover it. 00:25:40.445 [2024-11-26 20:55:43.632687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.632718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.632918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.632949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.633112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.633144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 
00:25:40.446 [2024-11-26 20:55:43.633244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.633271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.633415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.633455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.633560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.633587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.633699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.633726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.633805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.633842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.633963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.633991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.634106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.634132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.634216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.634243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.634358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.634388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.634478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.634505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 
00:25:40.446 [2024-11-26 20:55:43.634592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.634620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.634713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.634740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.634823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.634850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.634990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.635034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.635153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.635199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.635367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.635396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.635485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.635512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.635632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.635660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.635788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.635815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.635905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.635932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 
00:25:40.446 [2024-11-26 20:55:43.636047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.636074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.636184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.636224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.636330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.636359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.636476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.636503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.636586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.636613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.636697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.636724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.636812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.636840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.636959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.636986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.637100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.637128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.637238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.637268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 
00:25:40.446 [2024-11-26 20:55:43.637408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.637436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.637544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.637583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.637729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.637757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.637874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.637901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.638047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.638072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.638199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.638226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.638339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.638367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.638449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.638476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.638619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.638645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.638741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.638770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 
00:25:40.446 [2024-11-26 20:55:43.638917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.638946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.639072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.639099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.639224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.639263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.639405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.639445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.639595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.639623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.639789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.639819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.639945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.639990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.640080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.640108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.640212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.640252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.640388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.640417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 
00:25:40.446 [2024-11-26 20:55:43.640503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.640530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.640662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.640690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.640819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.640844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.640964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.641007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.641120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.641149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.641261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.641287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.641411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.641438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.641524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.641551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.641674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.641701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.641784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.641811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 
00:25:40.446 [2024-11-26 20:55:43.641928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.641955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.642034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.642060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.642147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.642173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.642288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.642326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.642422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.642448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.642556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.642581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.642696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.642722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.642837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.642862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.642946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.642971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.446 qpair failed and we were unable to recover it. 00:25:40.446 [2024-11-26 20:55:43.643070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.446 [2024-11-26 20:55:43.643111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 
00:25:40.447 [2024-11-26 20:55:43.643206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.643234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.643330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.643359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.643450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.643477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.643560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.643598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.643713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.643740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.643858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.643884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.643996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.644021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.644139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.644166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.644250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.644275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.644422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.644463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 
00:25:40.447 [2024-11-26 20:55:43.644572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.644611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.644707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.644736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.644847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.644875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.645018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.645061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.645171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.645196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.645338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.645366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.645452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.645478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.645561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.645587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.645670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.645697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.645802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.645828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 
00:25:40.447 [2024-11-26 20:55:43.645912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.645941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.646028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.646057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.646175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.646201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.646340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.646381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.646496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.646522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.646639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.646664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.646772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.646798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.646884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.646909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.647007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.647052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.647149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.647176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 
00:25:40.447 [2024-11-26 20:55:43.647314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.647356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.647460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.647488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.647612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.647638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.647747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.647773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.647885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.647910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.648005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.648035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.648151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.648178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.648261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.648299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.648395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.648422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.648536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.648563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 
00:25:40.447 [2024-11-26 20:55:43.648694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.648721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.648805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.648831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.648936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.648962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.649049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.649074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.649188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.649213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.649317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.649346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.649458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.649484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.649559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.649586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.649698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.649723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.649830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.649856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 
00:25:40.447 [2024-11-26 20:55:43.649973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.650002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.650116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.650142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.650241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.650280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.650396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.650425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.650514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.650541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.650637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.650669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.650788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.650815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.650932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.650960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.651041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.651068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.651190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.651218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 
00:25:40.447 [2024-11-26 20:55:43.651365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.651392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.651475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.651501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.651584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.651616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.651706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.651736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.651853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.651880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.447 qpair failed and we were unable to recover it. 00:25:40.447 [2024-11-26 20:55:43.651962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.447 [2024-11-26 20:55:43.651989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.652127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.652154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.652297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.652333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.652425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.652452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.652596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.652623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 
00:25:40.448 [2024-11-26 20:55:43.652706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.652731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.652842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.652867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.652975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.653035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.653119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.653148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.653246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.653287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.653391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.653419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.653504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.653531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.653638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.653665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.653751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.653777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.653910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.653965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 
00:25:40.448 [2024-11-26 20:55:43.654136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.654200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.654347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.654374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.654495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.654522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.654637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.654664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.654750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.654776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.654888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.654914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.654996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.655023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.655147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.655186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.655314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.655343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.655433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.655461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 
00:25:40.448 [2024-11-26 20:55:43.655545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.655571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.655665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.655691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.655828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.655854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.656016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.656070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.656216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.656245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.656392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.656439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.656554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.656582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.656692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.656718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.656835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.656896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.657083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.657108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 
00:25:40.448 [2024-11-26 20:55:43.657243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.657269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.657392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.657418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.657507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.657533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.657648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.657674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.657755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.657782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.657869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.657895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.658010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.658036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.658169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.658209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.658327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.658367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.658483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.658522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 
00:25:40.448 [2024-11-26 20:55:43.658625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.658652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.658740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.658766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.658850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.658876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.658955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.658980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.659061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.659087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.659197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.659223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.659344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.659372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.659514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.659544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.659663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.659690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.659836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.659863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 
00:25:40.448 [2024-11-26 20:55:43.659969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.659996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.660107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.660134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.660221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.660254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.660356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.660384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.660495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.660521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.660607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.660632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.660767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.660794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.660904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.660930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.661049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.661075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.661159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.661185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 
00:25:40.448 [2024-11-26 20:55:43.661269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.661312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.661432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.661459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.661566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.661603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.661744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.661770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.661857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.661884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.661992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.662019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.662140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.662166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.448 [2024-11-26 20:55:43.662282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.448 [2024-11-26 20:55:43.662326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.448 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.662446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.662472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.662617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.662643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 
00:25:40.449 [2024-11-26 20:55:43.662752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.662778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.662884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.662910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.663021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.663047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.663136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.663162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.663293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.663341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.663442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.663472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.663616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.663643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.663761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.663788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.663899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.663926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.664068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.664094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 
00:25:40.449 [2024-11-26 20:55:43.664175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.664200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.664286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.664320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.664406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.664432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.664559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.664585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.664723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.664748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.664872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.664898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.665010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.665038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.665194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.665233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.665359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.665389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.665478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.665505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 
00:25:40.449 [2024-11-26 20:55:43.665618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.665645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.665756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.665783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.665922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.665949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.666096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.666123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.666262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.666289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.666393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.666422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.666502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.666528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.666621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.666646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.666755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.666781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.666889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.666950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 
00:25:40.449 [2024-11-26 20:55:43.667060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.667086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.667234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.667260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.667361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.667390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.667506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.667532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.667619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.667646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.667781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.667808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.667932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.667958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.668077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.668106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.668194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.668222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.668318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.668345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 
00:25:40.449 [2024-11-26 20:55:43.668428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.668454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.668533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.668559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.668653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.668678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.668791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.668817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.668903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.668932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.669017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.669043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.669121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.669147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.669262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.669288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.669382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.669409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.669484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.669515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 
00:25:40.449 [2024-11-26 20:55:43.669635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.669662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.669827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.669881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.669995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.670021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.670155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.670180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.670316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.670356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.670477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.670506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.670624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.670652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.670741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.670768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.670881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.670907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.671020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.671046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 
00:25:40.449 [2024-11-26 20:55:43.671152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.671179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.671293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.671328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.671406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.671432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.671573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.671600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.449 qpair failed and we were unable to recover it. 00:25:40.449 [2024-11-26 20:55:43.671713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.449 [2024-11-26 20:55:43.671740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.671878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.671904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.672014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.672041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.672180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.672220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.672355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.672395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.672497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.672536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 
00:25:40.450 [2024-11-26 20:55:43.672692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.672747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.672922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.672986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.673120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.673170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.673263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.673291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.673398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.673428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.673544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.673570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.673684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.673739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.673881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.673927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.674133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.674186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.674335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.674361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 
00:25:40.450 [2024-11-26 20:55:43.674451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.674477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.674564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.674590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.674726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.674752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.674862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.674893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.675049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.675074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.675158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.675185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.675331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.675358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.675450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.675490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.675586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.675624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.675817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.675877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 
00:25:40.450 [2024-11-26 20:55:43.676033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.676059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.676139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.676165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.676309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.676336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.676415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.676441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.676523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.676549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.676625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.676651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.676740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.676767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.676877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.676903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.677038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.677064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.677153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.677178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 
00:25:40.450 [2024-11-26 20:55:43.677298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.677330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.677408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.677434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.677548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.677574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.677692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.677718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.677854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.677880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.677995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.678022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.678115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.678141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.678241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.678281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.678419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.678448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.678537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.678565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 
00:25:40.450 [2024-11-26 20:55:43.678697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.678741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.678820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.678847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.678930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.678957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.679060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.679100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.679186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.679214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.679354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.679382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.679477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.679509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.679624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.679650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.679757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.679783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.679875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.679901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 
00:25:40.450 [2024-11-26 20:55:43.680013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.680039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.680148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.680174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.680327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.680368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.680466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.680496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.680589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.680628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.680775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.680803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.680933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.680960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.681085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.681111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.681222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.681249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.681343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.681369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 
00:25:40.450 [2024-11-26 20:55:43.681471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.681498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.681604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.681630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.681825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.681887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.682054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.682113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.682188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.682214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.682330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.682356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.682440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.682466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.450 [2024-11-26 20:55:43.682552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.450 [2024-11-26 20:55:43.682578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.450 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.682691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.682717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.682801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.682827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 
00:25:40.451 [2024-11-26 20:55:43.682932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.682958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.683036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.683061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.683174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.683200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.683290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.683332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.683425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.683452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.683540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.683566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.683679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.683707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.683795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.683822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.683953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.683979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.684117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.684144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 
00:25:40.451 [2024-11-26 20:55:43.684242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.684282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.684403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.684442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.684554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.684582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.684710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.684757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.684943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.684997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.685077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.685103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.685221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.685247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.685365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.685406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.685494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.685522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.685611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.685637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 
00:25:40.451 [2024-11-26 20:55:43.685748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.685775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.685913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.685939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.686060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.686086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.686201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.686229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.686322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.686349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.686437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.686464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.686559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.686585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.686700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.686726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.686841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.686868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.686979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.687007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 
00:25:40.451 [2024-11-26 20:55:43.687144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.687189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.687295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.687350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.687441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.687468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.687555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.687581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.687690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.687716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.687834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.687861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.687975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.688003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.688124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.688154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.688268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.688295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.688407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.688434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 
00:25:40.451 [2024-11-26 20:55:43.688542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.688568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.688698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.688730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.688824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.688855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.689063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.689127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.689396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.689424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.689540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.689566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.689679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.689705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.689909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.689940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.690211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.690237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.690391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.690431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 
00:25:40.451 [2024-11-26 20:55:43.690530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.690570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.690686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.690713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.690847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.690907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.691093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.691119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.691243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.691282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.691385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.691413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.691500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.691527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.691640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.691667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.691806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.691884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.692129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.692194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 
00:25:40.451 [2024-11-26 20:55:43.692357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.692383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.692465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.692491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.692573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.692600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.692683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.692728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.692926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.692991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.693231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.693257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.693359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.693388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.693486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.693512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.693657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.693683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.451 [2024-11-26 20:55:43.693790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.693815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 
00:25:40.451 [2024-11-26 20:55:43.693965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.451 [2024-11-26 20:55:43.694024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.451 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.694163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.694189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.694276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.694309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.694399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.694426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.694536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.694562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.694668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.694694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.694954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.694980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.695297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.695380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.695474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.695501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.695640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.695666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 
00:25:40.452 [2024-11-26 20:55:43.695761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.695787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.695926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.695985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.696150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.696184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.696365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.696392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.696511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.696537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.696653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.696680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.696808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.696834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.696960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.696986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.697197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.697224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.697317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.697344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 
00:25:40.452 [2024-11-26 20:55:43.697430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.697458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.697542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.697569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.697710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.697736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.697826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.697890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.698156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.698192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.698313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.698340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.698431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.698458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.698568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.698608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.698728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.698756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.698973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.699029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 
00:25:40.452 [2024-11-26 20:55:43.699143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.699170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.699328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.699367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.699465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.699493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.699584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.699610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.699745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.699795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.699989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.700040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.700118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.700145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.700283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.700326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.700435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.700462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.700544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.700570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 
00:25:40.452 [2024-11-26 20:55:43.700774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.700805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.701013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.701079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.701264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.701291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.701436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.701462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.701553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.701579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.701714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.701740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.701836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.701862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.702010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.702076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.702188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.702216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.702330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.702357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 
00:25:40.452 [2024-11-26 20:55:43.702452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.702478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.702585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.702611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.702701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.702728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.702834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.702860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.703005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.452 [2024-11-26 20:55:43.703031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.452 qpair failed and we were unable to recover it. 00:25:40.452 [2024-11-26 20:55:43.703161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.703202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.703324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.703352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.703449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.703475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.703560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.703586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.703737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.703783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 
00:25:40.453 [2024-11-26 20:55:43.703912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.703958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.704099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.704126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.704221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.704251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.704350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.704379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.704466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.704493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.704622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.704677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.704761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.704788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.704965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.704998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.705109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.705135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.705244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.705270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 
00:25:40.453 [2024-11-26 20:55:43.705365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.705393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.705510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.705537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.705660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.705707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.705879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.705928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.706117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.706143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.706278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.706311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.706404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.706431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.706553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.706579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.706675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.706712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.706874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.706925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 
00:25:40.453 [2024-11-26 20:55:43.707032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.707058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.707151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.707178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.707315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.707355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.707459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.707498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.707604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.707644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.707733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.707761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.707846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.707873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.708127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.708193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.708383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.708410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.708524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.708550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 
00:25:40.453 [2024-11-26 20:55:43.708697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.708723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.708880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.708908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.709119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.709153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.709289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.709329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.709447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.709475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.709560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.709586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.709702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.709729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.709867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.709893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.710040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.710071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.710184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.710228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 
00:25:40.453 [2024-11-26 20:55:43.710349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.710375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.710464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.710492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.710586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.710612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.710698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.710724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.710855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.710887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.711067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.711132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.711297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.711328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.711415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.711453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.711555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.711582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.711674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.711701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 
00:25:40.453 [2024-11-26 20:55:43.711873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.711935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.712099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.712162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.712344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.712372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.712455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.712481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.712575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.712601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.712717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.712743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.712916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.712981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.713146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.713225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.713404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.713431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 00:25:40.453 [2024-11-26 20:55:43.713522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.453 [2024-11-26 20:55:43.713548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.453 qpair failed and we were unable to recover it. 
00:25:40.453 [2024-11-26 20:55:43.713667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.713694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.713800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.713838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.713960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.713989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.714084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.714154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.714288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.714357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.714442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.714468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.714595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.714634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.714731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.714760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.714841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.714868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.714951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.714978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 
00:25:40.454 [2024-11-26 20:55:43.715073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.715102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.715225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.715265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.715376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.715405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.715504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.715531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.715634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1bf30 is same with the state(6) to be set 00:25:40.454 [2024-11-26 20:55:43.715801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.715842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.715936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.715964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.716158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.716185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.716314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.716342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.716433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.716459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 
00:25:40.454 [2024-11-26 20:55:43.716544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.716574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.716655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.716682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.716830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.716882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.717025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.717080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.717235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.717276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.717409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.717449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.717546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.717575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.717658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.717685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.717821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.717867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.718018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.718103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 
00:25:40.454 [2024-11-26 20:55:43.718311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.718338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.718431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.718458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.718541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.718568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.718694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.718759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.718957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.719026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.719224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.719251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.719372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.719398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.719476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.719503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.719643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.719670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.719778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.719805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 
00:25:40.454 [2024-11-26 20:55:43.719922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.719949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.720191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.720272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.720449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.720477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.720569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.720596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.720731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.720758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.721026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.721091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.721259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.721286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.721375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.721402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.721510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.721537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.721615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.721642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 
00:25:40.454 [2024-11-26 20:55:43.721752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.721778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.721939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.722007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.722260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.722338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.722480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.722506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.722597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.722623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.722740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.722768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.722857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.722882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.723064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.723131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.723328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.723355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 00:25:40.454 [2024-11-26 20:55:43.723463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.454 [2024-11-26 20:55:43.723489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.454 qpair failed and we were unable to recover it. 
00:25:40.454 [2024-11-26 20:55:43.723615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.454 [2024-11-26 20:55:43.723641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420
00:25:40.454 qpair failed and we were unable to recover it.
00:25:40.455 [2024-11-26 20:55:43.724616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.455 [2024-11-26 20:55:43.724656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420
00:25:40.455 qpair failed and we were unable to recover it.
00:25:40.455 [2024-11-26 20:55:43.725411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.455 [2024-11-26 20:55:43.725451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420
00:25:40.455 qpair failed and we were unable to recover it.
00:25:40.455 [2024-11-26 20:55:43.727446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.455 [2024-11-26 20:55:43.727486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420
00:25:40.455 qpair failed and we were unable to recover it.
(The same three-line error sequence repeats for every subsequent connection attempt through 20:55:43.773, cycling among tqpair handles 0x7f27b4000b90, 0x7f27b0000b90, 0x7f27bc000b90, and 0x1f0dfa0, all targeting addr=10.0.0.2, port=4420 with errno = 111; each attempt ends with "qpair failed and we were unable to recover it.")
00:25:40.458 [2024-11-26 20:55:43.773990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.774050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.774295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.774369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.774576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.774638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.774864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.774924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.775190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.775251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.775476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.775540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.775844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.775915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.776150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.776210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.776464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.776524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.776742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.776820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 
00:25:40.458 [2024-11-26 20:55:43.777095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.777155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.777351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.777415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.777740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.777826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.778135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.778212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.778494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.778573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.778795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.778874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.779076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.779138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.779389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.779471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.779755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.779817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.780033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.780093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 
00:25:40.458 [2024-11-26 20:55:43.780322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.780383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.780596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.780655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.780864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.780923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.781186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.781246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.781442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.781503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.781813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.781900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.782147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.782206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.782445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.782526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.782755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.782835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.783039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.783100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 
00:25:40.458 [2024-11-26 20:55:43.783292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.783384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.783637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.783716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.783957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.784036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.784272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.784347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.784594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.784673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.784977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.785066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.785291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.785363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.785614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.785693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.785993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.786092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.786328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.786388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 
00:25:40.458 [2024-11-26 20:55:43.786619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.786697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.787011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.787090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.787371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.787453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.787756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.787833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.788131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.788210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.788411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.788491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.788747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.458 [2024-11-26 20:55:43.788825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.458 qpair failed and we were unable to recover it. 00:25:40.458 [2024-11-26 20:55:43.789076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.789136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.789379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.789460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.789719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.789798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 
00:25:40.459 [2024-11-26 20:55:43.790033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.790094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.790381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.790461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.790734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.790813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.791038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.791099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.791327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.791389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.791683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.791768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.792006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.792066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.792294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.792366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.792625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.792684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.792987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.793067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 
00:25:40.459 [2024-11-26 20:55:43.793323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.793383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.793631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.793719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.793959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.794038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.794275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.794348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.794587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.794664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.794892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.794970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.795239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.795299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.795567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.795647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.795948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.796037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.796226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.796288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 
00:25:40.459 [2024-11-26 20:55:43.796553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.796632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.796879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.796960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.797186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.797247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.797478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.797560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.797812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.797892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.798165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.798224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.798465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.798547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.798772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.798862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.799080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.799150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.799389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.799450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 
00:25:40.459 [2024-11-26 20:55:43.799723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.799783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.800009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.800085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.800299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.800371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.800601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.800678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.800930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.801013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.801254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.801337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.801606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.801684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.801892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.801974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.802176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.802238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.802497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.802575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 
00:25:40.459 [2024-11-26 20:55:43.802799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.802875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.803085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.803146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.803383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.803463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.803731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.803790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.803977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.804037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.804276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.804348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.804656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.804741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.804950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.805027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.805267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.805355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.805624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.805701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 
00:25:40.459 [2024-11-26 20:55:43.805937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.805998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.806239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.806299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.806575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.806655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.806947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.807037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.807269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.807341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.807627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.807718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.808011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.808088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.808332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.808393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.808657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.808734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.809019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.809104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 
00:25:40.459 [2024-11-26 20:55:43.809318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.809378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.809665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.809752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.810038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.810116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.810360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.810422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.810680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.810756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.459 qpair failed and we were unable to recover it. 00:25:40.459 [2024-11-26 20:55:43.810941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.459 [2024-11-26 20:55:43.811020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.811252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.811326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.811556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.811634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.811822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.811893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.812128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.812189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 
00:25:40.460 [2024-11-26 20:55:43.812426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.812505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.812797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.812885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.813153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.813212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.813481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.813561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.813867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.813944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.814153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.814213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.814487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.814566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.814835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.814896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.815100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.815160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.815413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.815490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 
00:25:40.460 [2024-11-26 20:55:43.815695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.815778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.816003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.816064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.816360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.816442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.816693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.816772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.817007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.817084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.817260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.817334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.817586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.817665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.817977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.818048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.818281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.818355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.818618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.818696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 
00:25:40.460 [2024-11-26 20:55:43.818940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.819021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.819222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.819282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.819568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.819646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.819940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.820016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.820251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.820333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.820602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.820680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.820919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.820999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.821241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.821316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.821546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.821623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.821854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.821932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 
00:25:40.460 [2024-11-26 20:55:43.822161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.822220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.822463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.822542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.822797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.822874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.823051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.823110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.823379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.823460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.823671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.823730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.823921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.823981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.824211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.824271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.824529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.824598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.824797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.824859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 
00:25:40.460 [2024-11-26 20:55:43.825088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.825149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.825348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.825407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.825642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.825702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.825960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.826020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.826259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.826329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.826534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.826594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.826830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.826891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.827118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.827176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.827475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.827564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.827857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.827936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 
00:25:40.460 [2024-11-26 20:55:43.828162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.828222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.828474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.828553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.828874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.828962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.829209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.829269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.829558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.829635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.829924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.830002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.830237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.830295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.830585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.830669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.830942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.831020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 00:25:40.460 [2024-11-26 20:55:43.831280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.460 [2024-11-26 20:55:43.831351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.460 qpair failed and we were unable to recover it. 
00:25:40.461 [2024-11-26 20:55:43.831610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.831688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.831877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.831940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.832204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.832264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.832558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.832646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.832898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.832976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.833213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.833273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.833531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.833609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.833882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.833980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.834220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.834278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.834542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.834619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 
00:25:40.461 [2024-11-26 20:55:43.834885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.834964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.835210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.835270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.835493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.835575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.835866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.835956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.836195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.836254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.836520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.836601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.836812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.836894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.837135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.837194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.837430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.837518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.837783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.837862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 
00:25:40.461 [2024-11-26 20:55:43.838101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.838160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.838382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.838463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.838716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.838796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.839040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.839100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.839399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.839486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.839758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.839818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.840059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.840120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.840351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.840413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.840710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.840799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.840993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.841056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 
00:25:40.461 [2024-11-26 20:55:43.841296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.841371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.841614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.841693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.841919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.841997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.842234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.842293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.842541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.842620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.842884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.842964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.843240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.843298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.843589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.843671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.843920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.843998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.844226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.844284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 
00:25:40.461 [2024-11-26 20:55:43.844585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.844662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.844886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.844965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.845231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.845290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.845577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.845655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.845924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.845984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.846274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.846351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.846544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.846607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.846893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.846983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.847246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.847324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.847624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.847708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 
00:25:40.461 [2024-11-26 20:55:43.847936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.848012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.848274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.848362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.848616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.848694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.848942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.849020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.849259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.849339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.849552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.849630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.849852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.849929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.850131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.850191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.850457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.850530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.850733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.850820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 
00:25:40.461 [2024-11-26 20:55:43.851079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.851138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.851414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.851475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.851664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.851724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.851911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.851971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.852159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.852219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.852479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.852540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.852760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.852819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.853056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.461 [2024-11-26 20:55:43.853116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.461 qpair failed and we were unable to recover it. 00:25:40.461 [2024-11-26 20:55:43.853339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.853400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.853612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.853689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 
00:25:40.462 [2024-11-26 20:55:43.853882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.853945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.854148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.854208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.854494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.854574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.854777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.854856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.855083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.855144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.855341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.855401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.855658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.855738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.855979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.856041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.856230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.856290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.856553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.856633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 
00:25:40.462 [2024-11-26 20:55:43.856883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.856965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.857162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.857221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.857497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.857560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.857815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.857892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.858112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.858172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.858445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.858526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.858790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.858850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.859078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.859140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.859384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.859465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.859766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.859844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 
00:25:40.462 [2024-11-26 20:55:43.860053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.860112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.860321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.860381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.860647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.860725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.860954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.861016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.861241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.861301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.861587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.861646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.861857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.861934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.862206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.862265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.862488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.862560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.862767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.862836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 
00:25:40.462 [2024-11-26 20:55:43.863113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.863173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.863424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.863501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.863731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.863809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.864037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.864096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.864371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.864432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.864660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.864738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.865013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.865073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.865342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.865403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.865652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.865717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.865991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.866055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 
00:25:40.462 [2024-11-26 20:55:43.866333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.866397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.866684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.866770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.867026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.867107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.867363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.867425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.867649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.867728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.867978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.868057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.868315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.868377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.868641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.868718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.868957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.869038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 00:25:40.462 [2024-11-26 20:55:43.869324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.462 [2024-11-26 20:55:43.869386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.462 qpair failed and we were unable to recover it. 
00:25:40.462 [2024-11-26 20:55:43.869691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.462 [2024-11-26 20:55:43.869769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420
00:25:40.462 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats roughly 200 more times with only the microsecond timestamps advancing, from [2024-11-26 20:55:43.870] through [2024-11-26 20:55:43.935] ...]
00:25:40.465 [2024-11-26 20:55:43.935840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.465 [2024-11-26 20:55:43.935917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.465 qpair failed and we were unable to recover it. 00:25:40.465 [2024-11-26 20:55:43.936197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.465 [2024-11-26 20:55:43.936258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.465 qpair failed and we were unable to recover it. 00:25:40.465 [2024-11-26 20:55:43.936489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.465 [2024-11-26 20:55:43.936586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.465 qpair failed and we were unable to recover it. 00:25:40.465 [2024-11-26 20:55:43.936850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.465 [2024-11-26 20:55:43.936931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.465 qpair failed and we were unable to recover it. 00:25:40.465 [2024-11-26 20:55:43.937210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.465 [2024-11-26 20:55:43.937271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.465 qpair failed and we were unable to recover it. 00:25:40.465 [2024-11-26 20:55:43.937509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.465 [2024-11-26 20:55:43.937588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.465 qpair failed and we were unable to recover it. 00:25:40.465 [2024-11-26 20:55:43.937822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.465 [2024-11-26 20:55:43.937900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.465 qpair failed and we were unable to recover it. 00:25:40.465 [2024-11-26 20:55:43.938089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.465 [2024-11-26 20:55:43.938149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.465 qpair failed and we were unable to recover it. 00:25:40.465 [2024-11-26 20:55:43.938414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.465 [2024-11-26 20:55:43.938494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.465 qpair failed and we were unable to recover it. 00:25:40.465 [2024-11-26 20:55:43.938774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.465 [2024-11-26 20:55:43.938851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.465 qpair failed and we were unable to recover it. 
00:25:40.465 [2024-11-26 20:55:43.939072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.465 [2024-11-26 20:55:43.939131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.465 qpair failed and we were unable to recover it. 00:25:40.465 [2024-11-26 20:55:43.939386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.465 [2024-11-26 20:55:43.939470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.465 qpair failed and we were unable to recover it. 00:25:40.465 [2024-11-26 20:55:43.939756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.465 [2024-11-26 20:55:43.939835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.465 qpair failed and we were unable to recover it. 00:25:40.465 [2024-11-26 20:55:43.940078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.465 [2024-11-26 20:55:43.940138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.465 qpair failed and we were unable to recover it. 00:25:40.465 [2024-11-26 20:55:43.940335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.465 [2024-11-26 20:55:43.940413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.465 qpair failed and we were unable to recover it. 00:25:40.465 [2024-11-26 20:55:43.940614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.465 [2024-11-26 20:55:43.940697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.465 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.940995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.941073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.941263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.941345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.941576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.941655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.941870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.941947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 
00:25:40.466 [2024-11-26 20:55:43.942162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.942222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.942471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.942550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.942814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.942893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.943135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.943195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.943441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.943520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.943771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.943850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.944086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.944146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.944429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.944507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.944796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.944873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.945110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.945171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 
00:25:40.466 [2024-11-26 20:55:43.945411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.945490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.945759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.945844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.946081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.946141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.946396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.946476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.946777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.946854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.947116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.947186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.947434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.947515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.947819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.947898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.948132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.948192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.948423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.948504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 
00:25:40.466 [2024-11-26 20:55:43.948810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.948888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.949150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.949211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.949433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.949513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.949771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.949849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.950102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.950162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.950373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.950455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.950653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.950733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.950987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.951066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.951327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.951388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.951703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.951780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 
00:25:40.466 [2024-11-26 20:55:43.952059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.952120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.952384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.952464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.952769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.952847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.953032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.953093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.953355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.953436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.953736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.953814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.954050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.954109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.954327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.954388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.954616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.954697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.955005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.955084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 
00:25:40.466 [2024-11-26 20:55:43.955331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.955392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.955641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.955725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.956043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.956121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.956343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.956420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.956720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.956797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.957038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.957116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.957365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.957445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.957706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.957767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.958031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.958093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.958363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.958441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 
00:25:40.466 [2024-11-26 20:55:43.958731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.958809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.959045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.959106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.959393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.959473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.959734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.959810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.960086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.960146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.960340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.960412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.960639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.960719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.960934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.960994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.961213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.961273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.961458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.961518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 
00:25:40.466 [2024-11-26 20:55:43.961783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.961843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.962047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.962110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.962364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.962450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.962709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.962787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.963059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.963118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.963411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.963491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.963757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.963834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.964021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.466 [2024-11-26 20:55:43.964083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.466 qpair failed and we were unable to recover it. 00:25:40.466 [2024-11-26 20:55:43.964322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.964385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.964631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.964713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 
00:25:40.467 [2024-11-26 20:55:43.964981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.965040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.965225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.965287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.965540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.965617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.965849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.965929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.966157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.966215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.966470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.966550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.966803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.966880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.967118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.967177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.967393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.967453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.967679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.967758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 
00:25:40.467 [2024-11-26 20:55:43.967983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.968044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.968325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.968386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.968596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.968680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.968931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.969010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.969282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.969362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.969690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.969767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.970029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.970106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.970326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.970387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.970592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.970680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.970983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.971061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 
00:25:40.467 [2024-11-26 20:55:43.971246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.971334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.971609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.971687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.971938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.972017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.972248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.972322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.972590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.972671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.972857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.972948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.973214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.973274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.973535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.973614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.973835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.973914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.974182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.974242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 
00:25:40.467 [2024-11-26 20:55:43.974547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.974628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.974885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.974963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.975190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.975249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.975573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.975659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.975898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.975976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.976196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.976256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.976522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.976601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.976843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.976921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.977127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.977188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.977463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.977543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 
00:25:40.467 [2024-11-26 20:55:43.977785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.977862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.978063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.978122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.978374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.978454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.978695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.978774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.979036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.979096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.979361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.979422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.979662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.979724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.979956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.980016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.980214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.980277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.980537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.980602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 
00:25:40.467 [2024-11-26 20:55:43.980896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.980975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.981198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.981259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.981530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.981609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.981855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.981933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.982125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.982184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.982409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.982489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.982757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.982835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.983068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.983128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.983390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.983470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.983713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.983791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 
00:25:40.467 [2024-11-26 20:55:43.984028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.984088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.984334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.984400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.984609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.984671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.984871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.467 [2024-11-26 20:55:43.984933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.467 qpair failed and we were unable to recover it. 00:25:40.467 [2024-11-26 20:55:43.985177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.985237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.985445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.985516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.985711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.985773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.986000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.986062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.986295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.986367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.986607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.986669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 
00:25:40.468 [2024-11-26 20:55:43.986892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.986970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.987194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.987255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.987490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.987573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.987811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.987889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.988095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.988156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.988453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.988534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.988835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.988914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.989191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.989252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.989475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.989556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.989830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.989910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 
00:25:40.468 [2024-11-26 20:55:43.990149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.990209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.990474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.990555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.990846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.990926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.991155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.991217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.991454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.991519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.991764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.991842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.992088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.992148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.992412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.992492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.992788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.992867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.993107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.993166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 
00:25:40.468 [2024-11-26 20:55:43.993377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.993459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.993731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.993809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.994087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.994148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.994398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.994479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.994738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.994816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.995037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.995099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.995379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.995460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.995757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.995834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.996078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.996138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.996352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.996415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 
00:25:40.468 [2024-11-26 20:55:43.996664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.996743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.996959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.997037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.997285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.997367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.997627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.997706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.997975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.998035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.998314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.998385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.998650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.998711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.998999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.999076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.999342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.999404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:43.999650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:43.999729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 
00:25:40.468 [2024-11-26 20:55:43.999943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:44.000020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:44.000297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:44.000379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:44.000643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:44.000706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:44.001005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:44.001083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:44.001286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:44.001357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:44.001594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:44.001654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:44.001932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:44.002009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:44.002211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:44.002273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:44.002549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:44.002634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:44.002865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:44.002943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 
00:25:40.468 [2024-11-26 20:55:44.003140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:44.003200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:44.003446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:44.003527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:44.003799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:44.003860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:44.004154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:44.004233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:44.004559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:44.004648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:44.004903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:44.004984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:44.005255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:44.005318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:44.005487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:44.005538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:44.005753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:44.005805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:44.006042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:44.006095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 
00:25:40.468 [2024-11-26 20:55:44.006299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:44.006378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:44.006545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:44.006597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:44.006801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:44.006855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:44.007080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:44.007161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:44.007376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:44.007446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:44.007637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:44.007728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.468 [2024-11-26 20:55:44.007945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.468 [2024-11-26 20:55:44.008004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.468 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.008203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.008267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.008546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.008599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.008844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.008905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 
00:25:40.469 [2024-11-26 20:55:44.009139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.009199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.009411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.009465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.009737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.009796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.010043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.010103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.010314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.010376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.010579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.010666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.010913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.010987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.011186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.011274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.011464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.011518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.011720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.011781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 
00:25:40.469 [2024-11-26 20:55:44.012000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.012059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.012258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.012352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.012521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.012575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.012857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.012917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.013134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.013195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.013401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.013457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.013631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.013683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.013885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.013965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.014154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.014220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.014491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.014544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 
00:25:40.469 [2024-11-26 20:55:44.014836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.014914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.015120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.015180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.015398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.015453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.015664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.015727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.015992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.016054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.016368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.016440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.016709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.016771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.016984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.017036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.017250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.017362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.017570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.017629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 
00:25:40.469 [2024-11-26 20:55:44.017841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.017893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.018107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.018163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.018393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.018451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.018660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.018721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.018907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.018960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.019161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.019221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.019432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.019489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.019708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.019765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.019996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.020049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.020232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.020284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 
00:25:40.469 [2024-11-26 20:55:44.020489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.020543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.020780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.020836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.021033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.021089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.021314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.021369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.021539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.021593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.021799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.021862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.022095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.022152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.022362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.022420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.022630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.022683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.022939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.023008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 
00:25:40.469 [2024-11-26 20:55:44.023215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.023287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.023470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.023528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.023749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.023806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.024006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.024062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.024266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.024334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.024567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.024624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.024860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.024913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.025101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.025154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.025341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.025395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.025617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.025670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 
00:25:40.469 [2024-11-26 20:55:44.025842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.025895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.026105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.026158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.026347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.026399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.026610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.026663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.026830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.026883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.027029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.027081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.027322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.469 [2024-11-26 20:55:44.027375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.469 qpair failed and we were unable to recover it. 00:25:40.469 [2024-11-26 20:55:44.027624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.027682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.027898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.027950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.028122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.028178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 
00:25:40.470 [2024-11-26 20:55:44.028419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.028473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.028699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.028751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.028967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.029021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.029253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.029317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.029563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.029624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.029789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.029842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.030081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.030133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.030324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.030388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.030626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.030679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.030890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.030943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 
00:25:40.470 [2024-11-26 20:55:44.031147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.031199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.031376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.031431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.031609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.031662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.031877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.031931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.032103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.032156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.032366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.032421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.032626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.032681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.032920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.032972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.033159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.033213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.033383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.033439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 
00:25:40.470 [2024-11-26 20:55:44.033647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.033701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.033915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.033968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.034162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.034238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.034495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.034574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.034819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.034899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.035180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.035233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.035444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.035499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.035709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.035762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.035961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.036014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.036205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.036258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 
00:25:40.470 [2024-11-26 20:55:44.036488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.036542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.036748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.036801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.037000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.037052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.037263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.037326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.037517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.037570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.037720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.037775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.037962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.038016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.038195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.038250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.038446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.038499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.038719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.038773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 
00:25:40.470 [2024-11-26 20:55:44.039012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.039065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.039246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.039299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.039510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.039572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.039779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.039832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.040068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.040121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.040326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.040380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.040565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.040620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.040829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.040883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.041062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.041115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.041295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.041358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 
00:25:40.470 [2024-11-26 20:55:44.041542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.041595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.041797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.041852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.041999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.042052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.042252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.042315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.042509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.042562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.042773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.042826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.043041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.043094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.470 [2024-11-26 20:55:44.043294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.470 [2024-11-26 20:55:44.043362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.470 qpair failed and we were unable to recover it. 00:25:40.751 [2024-11-26 20:55:44.043560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.751 [2024-11-26 20:55:44.043612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.751 qpair failed and we were unable to recover it. 00:25:40.751 [2024-11-26 20:55:44.043767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.751 [2024-11-26 20:55:44.043819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.751 qpair failed and we were unable to recover it. 
00:25:40.751 [2024-11-26 20:55:44.043991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.751 [2024-11-26 20:55:44.044045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.751 qpair failed and we were unable to recover it. 00:25:40.751 [2024-11-26 20:55:44.044223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.751 [2024-11-26 20:55:44.044276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.751 qpair failed and we were unable to recover it. 00:25:40.751 [2024-11-26 20:55:44.044523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.751 [2024-11-26 20:55:44.044574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.751 qpair failed and we were unable to recover it. 00:25:40.751 [2024-11-26 20:55:44.044732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.751 [2024-11-26 20:55:44.044780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.751 qpair failed and we were unable to recover it. 00:25:40.751 [2024-11-26 20:55:44.044964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.751 [2024-11-26 20:55:44.045015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.751 qpair failed and we were unable to recover it. 00:25:40.751 [2024-11-26 20:55:44.045166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.751 [2024-11-26 20:55:44.045213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.751 qpair failed and we were unable to recover it. 00:25:40.751 [2024-11-26 20:55:44.045384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.751 [2024-11-26 20:55:44.045437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.751 qpair failed and we were unable to recover it. 00:25:40.751 [2024-11-26 20:55:44.045637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.751 [2024-11-26 20:55:44.045687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.751 qpair failed and we were unable to recover it. 00:25:40.751 [2024-11-26 20:55:44.045838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.751 [2024-11-26 20:55:44.045891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.751 qpair failed and we were unable to recover it. 00:25:40.751 [2024-11-26 20:55:44.046106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.751 [2024-11-26 20:55:44.046157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.751 qpair failed and we were unable to recover it. 
00:25:40.751 [2024-11-26 20:55:44.046364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.751 [2024-11-26 20:55:44.046415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.751 qpair failed and we were unable to recover it. 00:25:40.751 [2024-11-26 20:55:44.046581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.751 [2024-11-26 20:55:44.046631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.751 qpair failed and we were unable to recover it. 00:25:40.751 [2024-11-26 20:55:44.046829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.751 [2024-11-26 20:55:44.046887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.751 qpair failed and we were unable to recover it. 00:25:40.751 [2024-11-26 20:55:44.047117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.751 [2024-11-26 20:55:44.047168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.751 qpair failed and we were unable to recover it. 00:25:40.751 [2024-11-26 20:55:44.047371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.751 [2024-11-26 20:55:44.047423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.751 qpair failed and we were unable to recover it. 00:25:40.751 [2024-11-26 20:55:44.047700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.751 [2024-11-26 20:55:44.047751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.751 qpair failed and we were unable to recover it. 00:25:40.751 [2024-11-26 20:55:44.047947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.751 [2024-11-26 20:55:44.047998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.751 qpair failed and we were unable to recover it. 00:25:40.751 [2024-11-26 20:55:44.048160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.751 [2024-11-26 20:55:44.048211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.751 qpair failed and we were unable to recover it. 00:25:40.751 [2024-11-26 20:55:44.048401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.751 [2024-11-26 20:55:44.048451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.751 qpair failed and we were unable to recover it. 00:25:40.751 [2024-11-26 20:55:44.048622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.751 [2024-11-26 20:55:44.048689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.751 qpair failed and we were unable to recover it. 
00:25:40.751 [2024-11-26 20:55:44.048852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.751 [2024-11-26 20:55:44.048910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.751 qpair failed and we were unable to recover it. 00:25:40.751 [2024-11-26 20:55:44.049081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.751 [2024-11-26 20:55:44.049132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.751 qpair failed and we were unable to recover it. 00:25:40.751 [2024-11-26 20:55:44.049294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.751 [2024-11-26 20:55:44.049383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.751 qpair failed and we were unable to recover it. 00:25:40.751 [2024-11-26 20:55:44.049796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.751 [2024-11-26 20:55:44.049899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.751 qpair failed and we were unable to recover it. 00:25:40.751 [2024-11-26 20:55:44.050082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.751 [2024-11-26 20:55:44.050138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.751 qpair failed and we were unable to recover it. 00:25:40.751 [2024-11-26 20:55:44.050333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.751 [2024-11-26 20:55:44.050385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.751 qpair failed and we were unable to recover it. 00:25:40.751 [2024-11-26 20:55:44.050583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.751 [2024-11-26 20:55:44.050635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.751 qpair failed and we were unable to recover it. 00:25:40.751 [2024-11-26 20:55:44.050825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.751 [2024-11-26 20:55:44.050876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.751 qpair failed and we were unable to recover it. 00:25:40.751 [2024-11-26 20:55:44.051080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.751 [2024-11-26 20:55:44.051130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.751 qpair failed and we were unable to recover it. 00:25:40.751 [2024-11-26 20:55:44.051298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.751 [2024-11-26 20:55:44.051362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.751 qpair failed and we were unable to recover it. 
00:25:40.751 [2024-11-26 20:55:44.051553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.751 [2024-11-26 20:55:44.051604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.751 qpair failed and we were unable to recover it. 00:25:40.751 [2024-11-26 20:55:44.051763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.751 [2024-11-26 20:55:44.051813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.752 qpair failed and we were unable to recover it. 00:25:40.752 [2024-11-26 20:55:44.052006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.752 [2024-11-26 20:55:44.052058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.752 qpair failed and we were unable to recover it. 00:25:40.752 [2024-11-26 20:55:44.052220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.752 [2024-11-26 20:55:44.052270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.752 qpair failed and we were unable to recover it. 00:25:40.752 [2024-11-26 20:55:44.052455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.752 [2024-11-26 20:55:44.052505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.752 qpair failed and we were unable to recover it. 00:25:40.752 [2024-11-26 20:55:44.052734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.752 [2024-11-26 20:55:44.052803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.752 qpair failed and we were unable to recover it. 00:25:40.752 [2024-11-26 20:55:44.053085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.752 [2024-11-26 20:55:44.053152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.752 qpair failed and we were unable to recover it. 00:25:40.752 [2024-11-26 20:55:44.053368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.752 [2024-11-26 20:55:44.053430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.752 qpair failed and we were unable to recover it. 00:25:40.752 [2024-11-26 20:55:44.053658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.752 [2024-11-26 20:55:44.053743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.752 qpair failed and we were unable to recover it. 00:25:40.752 [2024-11-26 20:55:44.054013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.752 [2024-11-26 20:55:44.054107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.752 qpair failed and we were unable to recover it. 
00:25:40.752 [2024-11-26 20:55:44.054328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.752 [2024-11-26 20:55:44.054378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.752 qpair failed and we were unable to recover it. 00:25:40.752 [2024-11-26 20:55:44.054578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.752 [2024-11-26 20:55:44.054628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.752 qpair failed and we were unable to recover it. 00:25:40.752 [2024-11-26 20:55:44.054890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.752 [2024-11-26 20:55:44.054942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.752 qpair failed and we were unable to recover it. 00:25:40.752 [2024-11-26 20:55:44.055140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.752 [2024-11-26 20:55:44.055211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.752 qpair failed and we were unable to recover it. 00:25:40.752 [2024-11-26 20:55:44.055527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.752 [2024-11-26 20:55:44.055592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.752 qpair failed and we were unable to recover it. 00:25:40.752 [2024-11-26 20:55:44.055823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.752 [2024-11-26 20:55:44.055910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.752 qpair failed and we were unable to recover it. 00:25:40.752 [2024-11-26 20:55:44.056155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.752 [2024-11-26 20:55:44.056216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.752 qpair failed and we were unable to recover it. 00:25:40.752 [2024-11-26 20:55:44.056514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.752 [2024-11-26 20:55:44.056579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.752 qpair failed and we were unable to recover it. 00:25:40.752 [2024-11-26 20:55:44.056789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.752 [2024-11-26 20:55:44.056841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.752 qpair failed and we were unable to recover it. 00:25:40.752 [2024-11-26 20:55:44.057027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.752 [2024-11-26 20:55:44.057086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.752 qpair failed and we were unable to recover it. 
00:25:40.752 [2024-11-26 20:55:44.057273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.752 [2024-11-26 20:55:44.057341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.752 qpair failed and we were unable to recover it. 00:25:40.752 [2024-11-26 20:55:44.057498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.752 [2024-11-26 20:55:44.057547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.752 qpair failed and we were unable to recover it. 00:25:40.752 [2024-11-26 20:55:44.057712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.752 [2024-11-26 20:55:44.057762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.752 qpair failed and we were unable to recover it. 00:25:40.752 [2024-11-26 20:55:44.057927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.752 [2024-11-26 20:55:44.057977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.752 qpair failed and we were unable to recover it. 00:25:40.752 [2024-11-26 20:55:44.058187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.752 [2024-11-26 20:55:44.058245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.752 qpair failed and we were unable to recover it. 00:25:40.752 [2024-11-26 20:55:44.058479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.752 [2024-11-26 20:55:44.058553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.752 qpair failed and we were unable to recover it. 00:25:40.752 [2024-11-26 20:55:44.058735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.752 [2024-11-26 20:55:44.058794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.752 qpair failed and we were unable to recover it. 00:25:40.752 [2024-11-26 20:55:44.059005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.752 [2024-11-26 20:55:44.059076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.752 qpair failed and we were unable to recover it. 00:25:40.752 [2024-11-26 20:55:44.059291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.752 [2024-11-26 20:55:44.059360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.752 qpair failed and we were unable to recover it. 00:25:40.752 [2024-11-26 20:55:44.059560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.752 [2024-11-26 20:55:44.059610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.752 qpair failed and we were unable to recover it. 
00:25:40.752 [2024-11-26 20:55:44.059785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.752 [2024-11-26 20:55:44.059833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.752 qpair failed and we were unable to recover it. 00:25:40.752 [2024-11-26 20:55:44.060031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.752 [2024-11-26 20:55:44.060083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.752 qpair failed and we were unable to recover it. 00:25:40.752 [2024-11-26 20:55:44.060233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.752 [2024-11-26 20:55:44.060293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.752 qpair failed and we were unable to recover it. 00:25:40.752 [2024-11-26 20:55:44.060510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.752 [2024-11-26 20:55:44.060559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.752 qpair failed and we were unable to recover it. 00:25:40.752 [2024-11-26 20:55:44.060787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.060836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 00:25:40.753 [2024-11-26 20:55:44.061056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.061107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 00:25:40.753 [2024-11-26 20:55:44.061273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.061348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 00:25:40.753 [2024-11-26 20:55:44.061503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.061553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 00:25:40.753 [2024-11-26 20:55:44.061734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.061783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 00:25:40.753 [2024-11-26 20:55:44.061968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.062021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 
00:25:40.753 [2024-11-26 20:55:44.062235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.062285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 00:25:40.753 [2024-11-26 20:55:44.062462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.062510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 00:25:40.753 [2024-11-26 20:55:44.062692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.062741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 00:25:40.753 [2024-11-26 20:55:44.062962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.063011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 00:25:40.753 [2024-11-26 20:55:44.063204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.063259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 00:25:40.753 [2024-11-26 20:55:44.063443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.063493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 00:25:40.753 [2024-11-26 20:55:44.063671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.063720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 00:25:40.753 [2024-11-26 20:55:44.063939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.063988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 00:25:40.753 [2024-11-26 20:55:44.064170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.064232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 00:25:40.753 [2024-11-26 20:55:44.064476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.064553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 
00:25:40.753 [2024-11-26 20:55:44.064821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.064906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 00:25:40.753 [2024-11-26 20:55:44.065139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.065206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 00:25:40.753 [2024-11-26 20:55:44.065452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.065528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 00:25:40.753 [2024-11-26 20:55:44.065753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.065805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 00:25:40.753 [2024-11-26 20:55:44.065980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.066031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 00:25:40.753 [2024-11-26 20:55:44.066230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.066281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 00:25:40.753 [2024-11-26 20:55:44.066507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.066561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 00:25:40.753 [2024-11-26 20:55:44.066798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.066853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 00:25:40.753 [2024-11-26 20:55:44.067045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.067096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 00:25:40.753 [2024-11-26 20:55:44.067297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.067372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 
00:25:40.753 [2024-11-26 20:55:44.067519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.067571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 00:25:40.753 [2024-11-26 20:55:44.067778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.067833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 00:25:40.753 [2024-11-26 20:55:44.068033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.068088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 00:25:40.753 [2024-11-26 20:55:44.068289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.068359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 00:25:40.753 [2024-11-26 20:55:44.068546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.068598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 00:25:40.753 [2024-11-26 20:55:44.068765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.068817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 00:25:40.753 [2024-11-26 20:55:44.069003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.069057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 00:25:40.753 [2024-11-26 20:55:44.069268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.069348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 00:25:40.753 [2024-11-26 20:55:44.069565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.069617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 00:25:40.753 [2024-11-26 20:55:44.069805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.069857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 
00:25:40.753 [2024-11-26 20:55:44.070064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.070128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 00:25:40.753 [2024-11-26 20:55:44.070317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.753 [2024-11-26 20:55:44.070372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.753 qpair failed and we were unable to recover it. 00:25:40.753 [2024-11-26 20:55:44.070547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.754 [2024-11-26 20:55:44.070599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.754 qpair failed and we were unable to recover it. 00:25:40.754 [2024-11-26 20:55:44.070833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.754 [2024-11-26 20:55:44.070895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.754 qpair failed and we were unable to recover it. 00:25:40.754 [2024-11-26 20:55:44.071100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.754 [2024-11-26 20:55:44.071155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.754 qpair failed and we were unable to recover it. 00:25:40.754 [2024-11-26 20:55:44.071384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.754 [2024-11-26 20:55:44.071442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.754 qpair failed and we were unable to recover it. 00:25:40.754 [2024-11-26 20:55:44.071668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.754 [2024-11-26 20:55:44.071727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.754 qpair failed and we were unable to recover it. 00:25:40.754 [2024-11-26 20:55:44.071911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.754 [2024-11-26 20:55:44.071969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.754 qpair failed and we were unable to recover it. 00:25:40.754 [2024-11-26 20:55:44.072162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.754 [2024-11-26 20:55:44.072220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.754 qpair failed and we were unable to recover it. 00:25:40.754 [2024-11-26 20:55:44.072436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.754 [2024-11-26 20:55:44.072493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.754 qpair failed and we were unable to recover it. 
00:25:40.754 [2024-11-26 20:55:44.072694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.754 [2024-11-26 20:55:44.072752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.754 qpair failed and we were unable to recover it. 00:25:40.754 [2024-11-26 20:55:44.072945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.754 [2024-11-26 20:55:44.073003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.754 qpair failed and we were unable to recover it. 00:25:40.754 [2024-11-26 20:55:44.073250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.754 [2024-11-26 20:55:44.073329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.754 qpair failed and we were unable to recover it. 00:25:40.754 [2024-11-26 20:55:44.073576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.754 [2024-11-26 20:55:44.073643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.754 qpair failed and we were unable to recover it. 00:25:40.754 [2024-11-26 20:55:44.073850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.754 [2024-11-26 20:55:44.073910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.754 qpair failed and we were unable to recover it. 00:25:40.754 [2024-11-26 20:55:44.074124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.754 [2024-11-26 20:55:44.074188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.754 qpair failed and we were unable to recover it. 00:25:40.754 [2024-11-26 20:55:44.074388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.754 [2024-11-26 20:55:44.074476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.754 qpair failed and we were unable to recover it. 00:25:40.754 [2024-11-26 20:55:44.074737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.754 [2024-11-26 20:55:44.074795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.754 qpair failed and we were unable to recover it. 00:25:40.754 [2024-11-26 20:55:44.075042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.754 [2024-11-26 20:55:44.075095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.754 qpair failed and we were unable to recover it. 00:25:40.754 [2024-11-26 20:55:44.075319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.754 [2024-11-26 20:55:44.075381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.754 qpair failed and we were unable to recover it. 
00:25:40.754 [2024-11-26 20:55:44.075600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.754 [2024-11-26 20:55:44.075657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.754 qpair failed and we were unable to recover it. 00:25:40.754 [2024-11-26 20:55:44.075903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.754 [2024-11-26 20:55:44.075981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.754 qpair failed and we were unable to recover it. 00:25:40.754 [2024-11-26 20:55:44.076167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.754 [2024-11-26 20:55:44.076225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.754 qpair failed and we were unable to recover it. 00:25:40.754 [2024-11-26 20:55:44.076476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.754 [2024-11-26 20:55:44.076540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.754 qpair failed and we were unable to recover it. 00:25:40.754 [2024-11-26 20:55:44.076799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.754 [2024-11-26 20:55:44.076859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.754 qpair failed and we were unable to recover it. 00:25:40.754 [2024-11-26 20:55:44.077073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.754 [2024-11-26 20:55:44.077128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.754 qpair failed and we were unable to recover it. 00:25:40.754 [2024-11-26 20:55:44.077326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.754 [2024-11-26 20:55:44.077380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.754 qpair failed and we were unable to recover it. 00:25:40.754 [2024-11-26 20:55:44.077598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.754 [2024-11-26 20:55:44.077656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.754 qpair failed and we were unable to recover it. 00:25:40.754 [2024-11-26 20:55:44.077878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.754 [2024-11-26 20:55:44.077939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.754 qpair failed and we were unable to recover it. 00:25:40.754 [2024-11-26 20:55:44.078128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.754 [2024-11-26 20:55:44.078188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.754 qpair failed and we were unable to recover it. 
00:25:40.754 [2024-11-26 20:55:44.078430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.754 [2024-11-26 20:55:44.078501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.754 qpair failed and we were unable to recover it. 00:25:40.754 [2024-11-26 20:55:44.078708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.754 [2024-11-26 20:55:44.078772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.754 qpair failed and we were unable to recover it. 00:25:40.754 [2024-11-26 20:55:44.079020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.754 [2024-11-26 20:55:44.079085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.754 qpair failed and we were unable to recover it. 00:25:40.754 [2024-11-26 20:55:44.079382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.754 [2024-11-26 20:55:44.079445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.754 qpair failed and we were unable to recover it. 00:25:40.754 [2024-11-26 20:55:44.079680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.754 [2024-11-26 20:55:44.079739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.754 qpair failed and we were unable to recover it. 00:25:40.754 [2024-11-26 20:55:44.079917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.755 [2024-11-26 20:55:44.079975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.755 qpair failed and we were unable to recover it. 00:25:40.755 [2024-11-26 20:55:44.080228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.755 [2024-11-26 20:55:44.080285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.755 qpair failed and we were unable to recover it. 00:25:40.755 [2024-11-26 20:55:44.080561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.755 [2024-11-26 20:55:44.080622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.755 qpair failed and we were unable to recover it. 00:25:40.755 [2024-11-26 20:55:44.080917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.755 [2024-11-26 20:55:44.080976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.755 qpair failed and we were unable to recover it. 00:25:40.755 [2024-11-26 20:55:44.081220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.755 [2024-11-26 20:55:44.081279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.755 qpair failed and we were unable to recover it. 
00:25:40.755 [2024-11-26 20:55:44.081489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.755 [2024-11-26 20:55:44.081547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.755 qpair failed and we were unable to recover it. 00:25:40.755 [2024-11-26 20:55:44.081776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.755 [2024-11-26 20:55:44.081838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.755 qpair failed and we were unable to recover it. 00:25:40.755 [2024-11-26 20:55:44.082067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.755 [2024-11-26 20:55:44.082141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.755 qpair failed and we were unable to recover it. 00:25:40.755 [2024-11-26 20:55:44.082399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.755 [2024-11-26 20:55:44.082458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.755 qpair failed and we were unable to recover it. 00:25:40.755 [2024-11-26 20:55:44.082730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.755 [2024-11-26 20:55:44.082795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.755 qpair failed and we were unable to recover it. 00:25:40.755 [2024-11-26 20:55:44.083008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.755 [2024-11-26 20:55:44.083075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.755 qpair failed and we were unable to recover it. 00:25:40.755 [2024-11-26 20:55:44.083330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.755 [2024-11-26 20:55:44.083388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.755 qpair failed and we were unable to recover it. 00:25:40.755 [2024-11-26 20:55:44.083661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.755 [2024-11-26 20:55:44.083728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.755 qpair failed and we were unable to recover it. 00:25:40.755 [2024-11-26 20:55:44.084006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.755 [2024-11-26 20:55:44.084067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.755 qpair failed and we were unable to recover it. 00:25:40.755 [2024-11-26 20:55:44.084381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.755 [2024-11-26 20:55:44.084466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.755 qpair failed and we were unable to recover it. 
00:25:40.755 [2024-11-26 20:55:44.084740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.755 [2024-11-26 20:55:44.084793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.755 qpair failed and we were unable to recover it. 00:25:40.755 [2024-11-26 20:55:44.085002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.755 [2024-11-26 20:55:44.085064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.755 qpair failed and we were unable to recover it. 00:25:40.755 [2024-11-26 20:55:44.085276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.755 [2024-11-26 20:55:44.085367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.755 qpair failed and we were unable to recover it. 00:25:40.755 [2024-11-26 20:55:44.085624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.755 [2024-11-26 20:55:44.085685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.755 qpair failed and we were unable to recover it. 00:25:40.755 [2024-11-26 20:55:44.085932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.755 [2024-11-26 20:55:44.085995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.755 qpair failed and we were unable to recover it. 00:25:40.755 [2024-11-26 20:55:44.086168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.755 [2024-11-26 20:55:44.086231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.755 qpair failed and we were unable to recover it. 00:25:40.755 [2024-11-26 20:55:44.086501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.755 [2024-11-26 20:55:44.086579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.755 qpair failed and we were unable to recover it. 00:25:40.755 [2024-11-26 20:55:44.086803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.755 [2024-11-26 20:55:44.086864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.755 qpair failed and we were unable to recover it. 00:25:40.755 [2024-11-26 20:55:44.087098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.755 [2024-11-26 20:55:44.087152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.755 qpair failed and we were unable to recover it. 00:25:40.755 [2024-11-26 20:55:44.087360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.755 [2024-11-26 20:55:44.087415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.755 qpair failed and we were unable to recover it. 
00:25:40.755 [2024-11-26 20:55:44.087709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.755 [2024-11-26 20:55:44.087775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.755 qpair failed and we were unable to recover it. 00:25:40.755 [2024-11-26 20:55:44.088019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.755 [2024-11-26 20:55:44.088081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.755 qpair failed and we were unable to recover it. 00:25:40.755 [2024-11-26 20:55:44.088349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.755 [2024-11-26 20:55:44.088420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.755 qpair failed and we were unable to recover it. 00:25:40.755 [2024-11-26 20:55:44.088729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.755 [2024-11-26 20:55:44.088783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.755 qpair failed and we were unable to recover it. 00:25:40.755 [2024-11-26 20:55:44.088982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.755 [2024-11-26 20:55:44.089040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.755 qpair failed and we were unable to recover it. 00:25:40.755 [2024-11-26 20:55:44.089203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.755 [2024-11-26 20:55:44.089284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.755 qpair failed and we were unable to recover it. 00:25:40.755 [2024-11-26 20:55:44.089544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.755 [2024-11-26 20:55:44.089602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.755 qpair failed and we were unable to recover it. 00:25:40.755 [2024-11-26 20:55:44.089852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.755 [2024-11-26 20:55:44.089912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.755 qpair failed and we were unable to recover it. 00:25:40.756 [2024-11-26 20:55:44.090154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.756 [2024-11-26 20:55:44.090211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.756 qpair failed and we were unable to recover it. 00:25:40.756 [2024-11-26 20:55:44.090500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.756 [2024-11-26 20:55:44.090559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.756 qpair failed and we were unable to recover it. 
00:25:40.756 [2024-11-26 20:55:44.090799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.756 [2024-11-26 20:55:44.090881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.756 qpair failed and we were unable to recover it. 00:25:40.756 [2024-11-26 20:55:44.091083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.756 [2024-11-26 20:55:44.091144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.756 qpair failed and we were unable to recover it. 00:25:40.756 [2024-11-26 20:55:44.091355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.756 [2024-11-26 20:55:44.091414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.756 qpair failed and we were unable to recover it. 00:25:40.756 [2024-11-26 20:55:44.091684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.756 [2024-11-26 20:55:44.091741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.756 qpair failed and we were unable to recover it. 00:25:40.756 [2024-11-26 20:55:44.091970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.756 [2024-11-26 20:55:44.092032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.756 qpair failed and we were unable to recover it. 00:25:40.756 [2024-11-26 20:55:44.092209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.756 [2024-11-26 20:55:44.092289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.756 qpair failed and we were unable to recover it. 00:25:40.756 [2024-11-26 20:55:44.092528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.756 [2024-11-26 20:55:44.092587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.756 qpair failed and we were unable to recover it. 00:25:40.756 [2024-11-26 20:55:44.092882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.756 [2024-11-26 20:55:44.092939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.756 qpair failed and we were unable to recover it. 00:25:40.756 [2024-11-26 20:55:44.093162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.756 [2024-11-26 20:55:44.093223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.756 qpair failed and we were unable to recover it. 00:25:40.756 [2024-11-26 20:55:44.093551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.756 [2024-11-26 20:55:44.093633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.756 qpair failed and we were unable to recover it. 
00:25:40.756 [2024-11-26 20:55:44.093861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.756 [2024-11-26 20:55:44.093917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.756 qpair failed and we were unable to recover it. 00:25:40.756 [2024-11-26 20:55:44.094131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.756 [2024-11-26 20:55:44.094191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.756 qpair failed and we were unable to recover it. 00:25:40.756 [2024-11-26 20:55:44.094455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.756 [2024-11-26 20:55:44.094537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.756 qpair failed and we were unable to recover it. 00:25:40.756 [2024-11-26 20:55:44.094791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.756 [2024-11-26 20:55:44.094853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.756 qpair failed and we were unable to recover it. 00:25:40.756 [2024-11-26 20:55:44.095128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.756 [2024-11-26 20:55:44.095191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.756 qpair failed and we were unable to recover it. 00:25:40.756 [2024-11-26 20:55:44.095465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.756 [2024-11-26 20:55:44.095527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.756 qpair failed and we were unable to recover it. 00:25:40.756 [2024-11-26 20:55:44.095726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.756 [2024-11-26 20:55:44.095787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.756 qpair failed and we were unable to recover it. 00:25:40.756 [2024-11-26 20:55:44.096004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.756 [2024-11-26 20:55:44.096065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.756 qpair failed and we were unable to recover it. 00:25:40.756 [2024-11-26 20:55:44.096269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.756 [2024-11-26 20:55:44.096345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.756 qpair failed and we were unable to recover it. 00:25:40.756 [2024-11-26 20:55:44.096591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.756 [2024-11-26 20:55:44.096651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.756 qpair failed and we were unable to recover it. 
00:25:40.756 [2024-11-26 20:55:44.096885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.756 [2024-11-26 20:55:44.096947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.756 qpair failed and we were unable to recover it. 00:25:40.756 [2024-11-26 20:55:44.097189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.756 [2024-11-26 20:55:44.097250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.756 qpair failed and we were unable to recover it. 00:25:40.756 [2024-11-26 20:55:44.097517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.756 [2024-11-26 20:55:44.097580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.756 qpair failed and we were unable to recover it. 00:25:40.756 [2024-11-26 20:55:44.097870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.756 [2024-11-26 20:55:44.097930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.756 qpair failed and we were unable to recover it. 00:25:40.756 [2024-11-26 20:55:44.098164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.756 [2024-11-26 20:55:44.098225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.756 qpair failed and we were unable to recover it. 00:25:40.756 [2024-11-26 20:55:44.098465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.756 [2024-11-26 20:55:44.098547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.756 qpair failed and we were unable to recover it. 00:25:40.756 [2024-11-26 20:55:44.098800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.756 [2024-11-26 20:55:44.098890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.756 qpair failed and we were unable to recover it. 00:25:40.756 [2024-11-26 20:55:44.099094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.756 [2024-11-26 20:55:44.099154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.756 qpair failed and we were unable to recover it. 00:25:40.756 [2024-11-26 20:55:44.099417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.756 [2024-11-26 20:55:44.099499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.756 qpair failed and we were unable to recover it. 00:25:40.756 [2024-11-26 20:55:44.099805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.756 [2024-11-26 20:55:44.099886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.756 qpair failed and we were unable to recover it. 
00:25:40.756 [2024-11-26 20:55:44.100116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.756 [2024-11-26 20:55:44.100178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.756 qpair failed and we were unable to recover it. 00:25:40.756 [2024-11-26 20:55:44.100441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.756 [2024-11-26 20:55:44.100503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 00:25:40.757 [2024-11-26 20:55:44.100772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.100833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 00:25:40.757 [2024-11-26 20:55:44.101097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.101157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 00:25:40.757 [2024-11-26 20:55:44.101378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.101440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 00:25:40.757 [2024-11-26 20:55:44.101686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.101746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 00:25:40.757 [2024-11-26 20:55:44.101934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.101995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 00:25:40.757 [2024-11-26 20:55:44.102201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.102261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 00:25:40.757 [2024-11-26 20:55:44.102546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.102606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 00:25:40.757 [2024-11-26 20:55:44.102872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.102933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 
00:25:40.757 [2024-11-26 20:55:44.103133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.103194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 00:25:40.757 [2024-11-26 20:55:44.103451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.103513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 00:25:40.757 [2024-11-26 20:55:44.103759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.103837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 00:25:40.757 [2024-11-26 20:55:44.104084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.104144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 00:25:40.757 [2024-11-26 20:55:44.104405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.104485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 00:25:40.757 [2024-11-26 20:55:44.104720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.104799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 00:25:40.757 [2024-11-26 20:55:44.105065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.105125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 00:25:40.757 [2024-11-26 20:55:44.105410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.105490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 00:25:40.757 [2024-11-26 20:55:44.105802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.105862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 00:25:40.757 [2024-11-26 20:55:44.106090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.106150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 
00:25:40.757 [2024-11-26 20:55:44.106377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.106441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 00:25:40.757 [2024-11-26 20:55:44.106705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.106765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 00:25:40.757 [2024-11-26 20:55:44.106996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.107060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 00:25:40.757 [2024-11-26 20:55:44.107321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.107384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 00:25:40.757 [2024-11-26 20:55:44.107624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.107684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 00:25:40.757 [2024-11-26 20:55:44.107943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.108003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 00:25:40.757 [2024-11-26 20:55:44.108230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.108293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 00:25:40.757 [2024-11-26 20:55:44.108509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.108570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 00:25:40.757 [2024-11-26 20:55:44.108838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.108898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 00:25:40.757 [2024-11-26 20:55:44.109123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.109184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 
00:25:40.757 [2024-11-26 20:55:44.109452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.109514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 00:25:40.757 [2024-11-26 20:55:44.109811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.109890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 00:25:40.757 [2024-11-26 20:55:44.110069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.110130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 00:25:40.757 [2024-11-26 20:55:44.110363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.110425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 00:25:40.757 [2024-11-26 20:55:44.110668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.110748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 00:25:40.757 [2024-11-26 20:55:44.111012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.111072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 00:25:40.757 [2024-11-26 20:55:44.111333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.111404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 00:25:40.757 [2024-11-26 20:55:44.111597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.111659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 00:25:40.757 [2024-11-26 20:55:44.111888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.757 [2024-11-26 20:55:44.111948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.757 qpair failed and we were unable to recover it. 00:25:40.757 [2024-11-26 20:55:44.112170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.758 [2024-11-26 20:55:44.112230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.758 qpair failed and we were unable to recover it. 
00:25:40.758 [2024-11-26 20:55:44.112516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.758 [2024-11-26 20:55:44.112577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.758 qpair failed and we were unable to recover it. 00:25:40.758 [2024-11-26 20:55:44.112841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.758 [2024-11-26 20:55:44.112900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.758 qpair failed and we were unable to recover it. 00:25:40.758 [2024-11-26 20:55:44.113129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.758 [2024-11-26 20:55:44.113189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.758 qpair failed and we were unable to recover it. 00:25:40.758 [2024-11-26 20:55:44.113403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.758 [2024-11-26 20:55:44.113465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.758 qpair failed and we were unable to recover it. 00:25:40.758 [2024-11-26 20:55:44.113687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.758 [2024-11-26 20:55:44.113748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.758 qpair failed and we were unable to recover it. 00:25:40.758 [2024-11-26 20:55:44.113943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.758 [2024-11-26 20:55:44.114004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.758 qpair failed and we were unable to recover it. 00:25:40.758 [2024-11-26 20:55:44.114233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.758 [2024-11-26 20:55:44.114296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.758 qpair failed and we were unable to recover it. 00:25:40.758 [2024-11-26 20:55:44.114562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.758 [2024-11-26 20:55:44.114624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.758 qpair failed and we were unable to recover it. 00:25:40.758 [2024-11-26 20:55:44.114924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.758 [2024-11-26 20:55:44.114984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.758 qpair failed and we were unable to recover it. 00:25:40.758 [2024-11-26 20:55:44.115204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.758 [2024-11-26 20:55:44.115265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.758 qpair failed and we were unable to recover it. 
00:25:40.758 [2024-11-26 20:55:44.115549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.758 [2024-11-26 20:55:44.115629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.758 qpair failed and we were unable to recover it. 00:25:40.758 [2024-11-26 20:55:44.115928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.758 [2024-11-26 20:55:44.116006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.758 qpair failed and we were unable to recover it. 00:25:40.758 [2024-11-26 20:55:44.116246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.758 [2024-11-26 20:55:44.116320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.758 qpair failed and we were unable to recover it. 00:25:40.758 [2024-11-26 20:55:44.116575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.758 [2024-11-26 20:55:44.116656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.758 qpair failed and we were unable to recover it. 00:25:40.758 [2024-11-26 20:55:44.116949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.758 [2024-11-26 20:55:44.117029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.758 qpair failed and we were unable to recover it. 00:25:40.758 [2024-11-26 20:55:44.117252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.758 [2024-11-26 20:55:44.117325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.758 qpair failed and we were unable to recover it. 00:25:40.758 [2024-11-26 20:55:44.117541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.758 [2024-11-26 20:55:44.117622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.758 qpair failed and we were unable to recover it. 00:25:40.758 [2024-11-26 20:55:44.117854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.758 [2024-11-26 20:55:44.117916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.758 qpair failed and we were unable to recover it. 00:25:40.758 [2024-11-26 20:55:44.118134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.758 [2024-11-26 20:55:44.118194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.758 qpair failed and we were unable to recover it. 00:25:40.758 [2024-11-26 20:55:44.118449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.758 [2024-11-26 20:55:44.118529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.758 qpair failed and we were unable to recover it. 
00:25:40.758 [2024-11-26 20:55:44.118759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.758 [2024-11-26 20:55:44.118819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.758 qpair failed and we were unable to recover it. 00:25:40.758 [2024-11-26 20:55:44.119041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.758 [2024-11-26 20:55:44.119100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.758 qpair failed and we were unable to recover it. 00:25:40.758 [2024-11-26 20:55:44.119336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.758 [2024-11-26 20:55:44.119397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.758 qpair failed and we were unable to recover it. 00:25:40.758 [2024-11-26 20:55:44.119670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.758 [2024-11-26 20:55:44.119749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.758 qpair failed and we were unable to recover it. 00:25:40.758 [2024-11-26 20:55:44.119984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.758 [2024-11-26 20:55:44.120044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.758 qpair failed and we were unable to recover it. 00:25:40.758 [2024-11-26 20:55:44.120322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.758 [2024-11-26 20:55:44.120383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.758 qpair failed and we were unable to recover it. 00:25:40.758 [2024-11-26 20:55:44.120640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.758 [2024-11-26 20:55:44.120718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.758 qpair failed and we were unable to recover it. 00:25:40.758 [2024-11-26 20:55:44.120965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.758 [2024-11-26 20:55:44.121045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.758 qpair failed and we were unable to recover it. 00:25:40.758 [2024-11-26 20:55:44.121356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.758 [2024-11-26 20:55:44.121417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.758 qpair failed and we were unable to recover it. 00:25:40.758 [2024-11-26 20:55:44.121667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.758 [2024-11-26 20:55:44.121746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.758 qpair failed and we were unable to recover it. 
00:25:40.758 [2024-11-26 20:55:44.121991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.758 [2024-11-26 20:55:44.122073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.758 qpair failed and we were unable to recover it. 00:25:40.758 [2024-11-26 20:55:44.122320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.758 [2024-11-26 20:55:44.122382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.758 qpair failed and we were unable to recover it. 00:25:40.758 [2024-11-26 20:55:44.122627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.758 [2024-11-26 20:55:44.122706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.758 qpair failed and we were unable to recover it. 00:25:40.758 [2024-11-26 20:55:44.122934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.123021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 00:25:40.759 [2024-11-26 20:55:44.123252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.123327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 00:25:40.759 [2024-11-26 20:55:44.123595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.123675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 00:25:40.759 [2024-11-26 20:55:44.123935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.124023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 00:25:40.759 [2024-11-26 20:55:44.124220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.124281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 00:25:40.759 [2024-11-26 20:55:44.124535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.124615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 00:25:40.759 [2024-11-26 20:55:44.124913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.124991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 
00:25:40.759 [2024-11-26 20:55:44.125216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.125275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 00:25:40.759 [2024-11-26 20:55:44.125577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.125639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 00:25:40.759 [2024-11-26 20:55:44.125837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.125916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 00:25:40.759 [2024-11-26 20:55:44.126107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.126169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 00:25:40.759 [2024-11-26 20:55:44.126457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.126538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 00:25:40.759 [2024-11-26 20:55:44.126792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.126870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 00:25:40.759 [2024-11-26 20:55:44.127140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.127200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 00:25:40.759 [2024-11-26 20:55:44.127475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.127555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 00:25:40.759 [2024-11-26 20:55:44.131496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.131592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 00:25:40.759 [2024-11-26 20:55:44.131895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.131978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 
00:25:40.759 [2024-11-26 20:55:44.132226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.132288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 00:25:40.759 [2024-11-26 20:55:44.132547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.132606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 00:25:40.759 [2024-11-26 20:55:44.132912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.132991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 00:25:40.759 [2024-11-26 20:55:44.133256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.133331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 00:25:40.759 [2024-11-26 20:55:44.133564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.133624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 00:25:40.759 [2024-11-26 20:55:44.133922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.134000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 00:25:40.759 [2024-11-26 20:55:44.134233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.134293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 00:25:40.759 [2024-11-26 20:55:44.134547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.134608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 00:25:40.759 [2024-11-26 20:55:44.134871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.134951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 00:25:40.759 [2024-11-26 20:55:44.135213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.135274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 
00:25:40.759 [2024-11-26 20:55:44.135581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.135642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 00:25:40.759 [2024-11-26 20:55:44.135866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.135944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 00:25:40.759 [2024-11-26 20:55:44.136127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.136187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 00:25:40.759 [2024-11-26 20:55:44.136517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.136598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 00:25:40.759 [2024-11-26 20:55:44.136857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.136936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 00:25:40.759 [2024-11-26 20:55:44.137203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.137264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 00:25:40.759 [2024-11-26 20:55:44.137505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.137584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 00:25:40.759 [2024-11-26 20:55:44.137879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.137957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 00:25:40.759 [2024-11-26 20:55:44.138234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.138293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 00:25:40.759 [2024-11-26 20:55:44.138597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.138679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 
00:25:40.759 [2024-11-26 20:55:44.138971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.759 [2024-11-26 20:55:44.139049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.759 qpair failed and we were unable to recover it. 00:25:40.759 [2024-11-26 20:55:44.139284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.760 [2024-11-26 20:55:44.139359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.760 qpair failed and we were unable to recover it. 00:25:40.760 [2024-11-26 20:55:44.139622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.760 [2024-11-26 20:55:44.139700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.760 qpair failed and we were unable to recover it. 00:25:40.760 [2024-11-26 20:55:44.139997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.760 [2024-11-26 20:55:44.140075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.760 qpair failed and we were unable to recover it. 00:25:40.760 [2024-11-26 20:55:44.140338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.760 [2024-11-26 20:55:44.140403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.760 qpair failed and we were unable to recover it. 00:25:40.760 [2024-11-26 20:55:44.140657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.760 [2024-11-26 20:55:44.140737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.760 qpair failed and we were unable to recover it. 00:25:40.760 [2024-11-26 20:55:44.140991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.760 [2024-11-26 20:55:44.141081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.760 qpair failed and we were unable to recover it. 00:25:40.760 [2024-11-26 20:55:44.141275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.760 [2024-11-26 20:55:44.141351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.760 qpair failed and we were unable to recover it. 00:25:40.760 [2024-11-26 20:55:44.141605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.760 [2024-11-26 20:55:44.141684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.760 qpair failed and we were unable to recover it. 00:25:40.760 [2024-11-26 20:55:44.141943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.760 [2024-11-26 20:55:44.142022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.760 qpair failed and we were unable to recover it. 
00:25:40.760 [2024-11-26 20:55:44.142235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.760 [2024-11-26 20:55:44.142295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.760 qpair failed and we were unable to recover it. 00:25:40.760 [2024-11-26 20:55:44.142627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.760 [2024-11-26 20:55:44.142706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.760 qpair failed and we were unable to recover it. 00:25:40.760 [2024-11-26 20:55:44.142994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.760 [2024-11-26 20:55:44.143074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.760 qpair failed and we were unable to recover it. 00:25:40.760 [2024-11-26 20:55:44.143329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.760 [2024-11-26 20:55:44.143391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.760 qpair failed and we were unable to recover it. 00:25:40.760 [2024-11-26 20:55:44.143662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.760 [2024-11-26 20:55:44.143723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.760 qpair failed and we were unable to recover it. 00:25:40.760 [2024-11-26 20:55:44.143924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.760 [2024-11-26 20:55:44.144003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.760 qpair failed and we were unable to recover it. 00:25:40.760 [2024-11-26 20:55:44.144236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.760 [2024-11-26 20:55:44.144300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.760 qpair failed and we were unable to recover it. 00:25:40.760 [2024-11-26 20:55:44.144588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.760 [2024-11-26 20:55:44.144649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.760 qpair failed and we were unable to recover it. 00:25:40.760 [2024-11-26 20:55:44.144895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.760 [2024-11-26 20:55:44.144955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.760 qpair failed and we were unable to recover it. 00:25:40.760 [2024-11-26 20:55:44.145199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.760 [2024-11-26 20:55:44.145259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.760 qpair failed and we were unable to recover it. 
00:25:40.760 [2024-11-26 20:55:44.145485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.760 [2024-11-26 20:55:44.145547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.760 qpair failed and we were unable to recover it. 00:25:40.760 [2024-11-26 20:55:44.145801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.760 [2024-11-26 20:55:44.145879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.760 qpair failed and we were unable to recover it. 00:25:40.760 [2024-11-26 20:55:44.146071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.760 [2024-11-26 20:55:44.146151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.760 qpair failed and we were unable to recover it. 00:25:40.760 [2024-11-26 20:55:44.146405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.760 [2024-11-26 20:55:44.146486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.760 qpair failed and we were unable to recover it. 00:25:40.760 [2024-11-26 20:55:44.146775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.760 [2024-11-26 20:55:44.146853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.760 qpair failed and we were unable to recover it. 00:25:40.760 [2024-11-26 20:55:44.147091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.760 [2024-11-26 20:55:44.147150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.760 qpair failed and we were unable to recover it. 00:25:40.760 [2024-11-26 20:55:44.147431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.760 [2024-11-26 20:55:44.147512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.760 qpair failed and we were unable to recover it. 00:25:40.760 [2024-11-26 20:55:44.147807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.760 [2024-11-26 20:55:44.147886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.760 qpair failed and we were unable to recover it. 00:25:40.760 [2024-11-26 20:55:44.148091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.760 [2024-11-26 20:55:44.148151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.760 qpair failed and we were unable to recover it. 00:25:40.760 [2024-11-26 20:55:44.148417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.760 [2024-11-26 20:55:44.148519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.760 qpair failed and we were unable to recover it. 
00:25:40.760 [2024-11-26 20:55:44.148789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.760 [2024-11-26 20:55:44.148854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.760 qpair failed and we were unable to recover it. 00:25:40.760 [2024-11-26 20:55:44.149064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.760 [2024-11-26 20:55:44.149125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.760 qpair failed and we were unable to recover it. 00:25:40.760 [2024-11-26 20:55:44.149391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.760 [2024-11-26 20:55:44.149453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.760 qpair failed and we were unable to recover it. 00:25:40.760 [2024-11-26 20:55:44.149648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.760 [2024-11-26 20:55:44.149709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.760 qpair failed and we were unable to recover it. 00:25:40.760 [2024-11-26 20:55:44.149916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.760 [2024-11-26 20:55:44.149976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 00:25:40.761 [2024-11-26 20:55:44.150238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.150298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 00:25:40.761 [2024-11-26 20:55:44.150550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.150611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 00:25:40.761 [2024-11-26 20:55:44.150844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.150904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 00:25:40.761 [2024-11-26 20:55:44.151091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.151153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 00:25:40.761 [2024-11-26 20:55:44.151424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.151503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 
00:25:40.761 [2024-11-26 20:55:44.151795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.151873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 00:25:40.761 [2024-11-26 20:55:44.152110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.152169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 00:25:40.761 [2024-11-26 20:55:44.152419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.152501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 00:25:40.761 [2024-11-26 20:55:44.152718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.152796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 00:25:40.761 [2024-11-26 20:55:44.153012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.153071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 00:25:40.761 [2024-11-26 20:55:44.153299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.153373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 00:25:40.761 [2024-11-26 20:55:44.153601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.153708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 00:25:40.761 [2024-11-26 20:55:44.154005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.154083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 00:25:40.761 [2024-11-26 20:55:44.154346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.154410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 00:25:40.761 [2024-11-26 20:55:44.154680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.154759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 
00:25:40.761 [2024-11-26 20:55:44.154951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.155013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 00:25:40.761 [2024-11-26 20:55:44.155227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.155289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 00:25:40.761 [2024-11-26 20:55:44.155573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.155654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 00:25:40.761 [2024-11-26 20:55:44.155962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.156039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 00:25:40.761 [2024-11-26 20:55:44.156232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.156296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 00:25:40.761 [2024-11-26 20:55:44.156579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.156658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 00:25:40.761 [2024-11-26 20:55:44.156954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.157031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 00:25:40.761 [2024-11-26 20:55:44.157259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.157335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 00:25:40.761 [2024-11-26 20:55:44.157574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.157653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 00:25:40.761 [2024-11-26 20:55:44.157908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.157987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 
00:25:40.761 [2024-11-26 20:55:44.158273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.158361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 00:25:40.761 [2024-11-26 20:55:44.158657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.158735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 00:25:40.761 [2024-11-26 20:55:44.159013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.159073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 00:25:40.761 [2024-11-26 20:55:44.159324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.159386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 00:25:40.761 [2024-11-26 20:55:44.159628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.159706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 00:25:40.761 [2024-11-26 20:55:44.159994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.160072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 00:25:40.761 [2024-11-26 20:55:44.160289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.160363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 00:25:40.761 [2024-11-26 20:55:44.160625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.160704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 00:25:40.761 [2024-11-26 20:55:44.160965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.161042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 00:25:40.761 [2024-11-26 20:55:44.161260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.161330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 
00:25:40.761 [2024-11-26 20:55:44.161577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.161657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 00:25:40.761 [2024-11-26 20:55:44.161915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.161994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 00:25:40.761 [2024-11-26 20:55:44.162232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.761 [2024-11-26 20:55:44.162292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.761 qpair failed and we were unable to recover it. 00:25:40.762 [2024-11-26 20:55:44.162547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.162635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 00:25:40.762 [2024-11-26 20:55:44.162931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.163009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 00:25:40.762 [2024-11-26 20:55:44.163286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.163362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 00:25:40.762 [2024-11-26 20:55:44.163653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.163731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 00:25:40.762 [2024-11-26 20:55:44.164041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.164119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 00:25:40.762 [2024-11-26 20:55:44.164400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.164462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 00:25:40.762 [2024-11-26 20:55:44.164741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.164819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 
00:25:40.762 [2024-11-26 20:55:44.165082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.165161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 00:25:40.762 [2024-11-26 20:55:44.165413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.165495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 00:25:40.762 [2024-11-26 20:55:44.165783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.165861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 00:25:40.762 [2024-11-26 20:55:44.166156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.166235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 00:25:40.762 [2024-11-26 20:55:44.166535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.166615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 00:25:40.762 [2024-11-26 20:55:44.166796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.166872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 00:25:40.762 [2024-11-26 20:55:44.167138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.167199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 00:25:40.762 [2024-11-26 20:55:44.167570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.167657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 00:25:40.762 [2024-11-26 20:55:44.167850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.167912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 00:25:40.762 [2024-11-26 20:55:44.168178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.168238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 
00:25:40.762 [2024-11-26 20:55:44.168505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.168586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 00:25:40.762 [2024-11-26 20:55:44.168785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.168864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 00:25:40.762 [2024-11-26 20:55:44.169101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.169165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 00:25:40.762 [2024-11-26 20:55:44.169428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.169467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 00:25:40.762 [2024-11-26 20:55:44.169616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.169650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 00:25:40.762 [2024-11-26 20:55:44.169814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.169851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 00:25:40.762 [2024-11-26 20:55:44.169978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.170045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 00:25:40.762 [2024-11-26 20:55:44.170214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.170250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 00:25:40.762 [2024-11-26 20:55:44.170420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.170454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 00:25:40.762 [2024-11-26 20:55:44.170598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.170633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 
00:25:40.762 [2024-11-26 20:55:44.170769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.170805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 00:25:40.762 [2024-11-26 20:55:44.170989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.171025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 00:25:40.762 [2024-11-26 20:55:44.171174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.171210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 00:25:40.762 [2024-11-26 20:55:44.171351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.171386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 00:25:40.762 [2024-11-26 20:55:44.171524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.171557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 00:25:40.762 [2024-11-26 20:55:44.171739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.171774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 00:25:40.762 [2024-11-26 20:55:44.171911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.171944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 00:25:40.762 [2024-11-26 20:55:44.172089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.172124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 00:25:40.762 [2024-11-26 20:55:44.172242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.172275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 00:25:40.762 [2024-11-26 20:55:44.172427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.172479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 
00:25:40.762 [2024-11-26 20:55:44.172636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.762 [2024-11-26 20:55:44.172675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.762 qpair failed and we were unable to recover it. 00:25:40.763 [2024-11-26 20:55:44.172818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.763 [2024-11-26 20:55:44.172854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.763 qpair failed and we were unable to recover it. 00:25:40.763 [2024-11-26 20:55:44.172994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.763 [2024-11-26 20:55:44.173027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.763 qpair failed and we were unable to recover it. 00:25:40.763 [2024-11-26 20:55:44.173142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.763 [2024-11-26 20:55:44.173189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.763 qpair failed and we were unable to recover it. 00:25:40.763 [2024-11-26 20:55:44.173319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.763 [2024-11-26 20:55:44.173353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.763 qpair failed and we were unable to recover it. 00:25:40.763 [2024-11-26 20:55:44.173459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.763 [2024-11-26 20:55:44.173522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.763 qpair failed and we were unable to recover it. 00:25:40.763 [2024-11-26 20:55:44.173668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.763 [2024-11-26 20:55:44.173703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.763 qpair failed and we were unable to recover it. 00:25:40.763 [2024-11-26 20:55:44.173820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.763 [2024-11-26 20:55:44.173854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.763 qpair failed and we were unable to recover it. 00:25:40.763 [2024-11-26 20:55:44.173970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.763 [2024-11-26 20:55:44.174004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.763 qpair failed and we were unable to recover it. 00:25:40.763 [2024-11-26 20:55:44.174135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.763 [2024-11-26 20:55:44.174170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.763 qpair failed and we were unable to recover it. 
00:25:40.763 [2024-11-26 20:55:44.174317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.763 [2024-11-26 20:55:44.174370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.763 qpair failed and we were unable to recover it. 00:25:40.763 [2024-11-26 20:55:44.174514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.763 [2024-11-26 20:55:44.174549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.763 qpair failed and we were unable to recover it. 00:25:40.763 [2024-11-26 20:55:44.174670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.763 [2024-11-26 20:55:44.174701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.763 qpair failed and we were unable to recover it. 00:25:40.763 [2024-11-26 20:55:44.174844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.763 [2024-11-26 20:55:44.174879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.763 qpair failed and we were unable to recover it. 00:25:40.763 [2024-11-26 20:55:44.174990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.763 [2024-11-26 20:55:44.175023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.763 qpair failed and we were unable to recover it. 00:25:40.763 [2024-11-26 20:55:44.175126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.763 [2024-11-26 20:55:44.175158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.763 qpair failed and we were unable to recover it. 00:25:40.763 [2024-11-26 20:55:44.175269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.763 [2024-11-26 20:55:44.175301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.763 qpair failed and we were unable to recover it. 00:25:40.763 [2024-11-26 20:55:44.175475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.763 [2024-11-26 20:55:44.175510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.763 qpair failed and we were unable to recover it. 00:25:40.763 [2024-11-26 20:55:44.175674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.763 [2024-11-26 20:55:44.175708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.763 qpair failed and we were unable to recover it. 00:25:40.763 [2024-11-26 20:55:44.175842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.763 [2024-11-26 20:55:44.175875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:40.763 qpair failed and we were unable to recover it. 
00:25:40.763 [2024-11-26 20:55:44.175988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.763 [2024-11-26 20:55:44.176028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.763 qpair failed and we were unable to recover it. 00:25:40.763 [2024-11-26 20:55:44.176140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.763 [2024-11-26 20:55:44.176174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.763 qpair failed and we were unable to recover it. 00:25:40.763 [2024-11-26 20:55:44.176284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.763 [2024-11-26 20:55:44.176337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.763 qpair failed and we were unable to recover it. 00:25:40.763 [2024-11-26 20:55:44.176450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.763 [2024-11-26 20:55:44.176483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.763 qpair failed and we were unable to recover it. 00:25:40.763 [2024-11-26 20:55:44.176595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.763 [2024-11-26 20:55:44.176627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.763 qpair failed and we were unable to recover it. 00:25:40.763 [2024-11-26 20:55:44.176774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.763 [2024-11-26 20:55:44.176805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.763 qpair failed and we were unable to recover it. 00:25:40.763 [2024-11-26 20:55:44.176949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.763 [2024-11-26 20:55:44.176981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.763 qpair failed and we were unable to recover it. 00:25:40.763 [2024-11-26 20:55:44.177101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.763 [2024-11-26 20:55:44.177134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.763 qpair failed and we were unable to recover it. 00:25:40.763 [2024-11-26 20:55:44.177294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.763 [2024-11-26 20:55:44.177356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.763 qpair failed and we were unable to recover it. 00:25:40.763 [2024-11-26 20:55:44.177483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.763 [2024-11-26 20:55:44.177516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.763 qpair failed and we were unable to recover it. 
00:25:40.763 [2024-11-26 20:55:44.177632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.763 [2024-11-26 20:55:44.177663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.763 qpair failed and we were unable to recover it. 00:25:40.763 [2024-11-26 20:55:44.177760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.763 [2024-11-26 20:55:44.177791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.763 qpair failed and we were unable to recover it. 00:25:40.763 [2024-11-26 20:55:44.177898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.763 [2024-11-26 20:55:44.177931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.763 qpair failed and we were unable to recover it. 00:25:40.763 [2024-11-26 20:55:44.178050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.763 [2024-11-26 20:55:44.178082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.763 qpair failed and we were unable to recover it. 00:25:40.763 [2024-11-26 20:55:44.178188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.178219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 00:25:40.764 [2024-11-26 20:55:44.178334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.178367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 00:25:40.764 [2024-11-26 20:55:44.178472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.178504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 00:25:40.764 [2024-11-26 20:55:44.178623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.178654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 00:25:40.764 [2024-11-26 20:55:44.178782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.178813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 00:25:40.764 [2024-11-26 20:55:44.178916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.178946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 
00:25:40.764 [2024-11-26 20:55:44.179091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.179126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 00:25:40.764 [2024-11-26 20:55:44.179243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.179276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 00:25:40.764 [2024-11-26 20:55:44.179413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.179445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 00:25:40.764 [2024-11-26 20:55:44.179553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.179591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 00:25:40.764 [2024-11-26 20:55:44.179727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.179757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 00:25:40.764 [2024-11-26 20:55:44.179890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.179930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 00:25:40.764 [2024-11-26 20:55:44.180090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.180143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 00:25:40.764 [2024-11-26 20:55:44.180359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.180390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 00:25:40.764 [2024-11-26 20:55:44.180500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.180529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 00:25:40.764 [2024-11-26 20:55:44.180670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.180702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 
00:25:40.764 [2024-11-26 20:55:44.180884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.180964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 00:25:40.764 [2024-11-26 20:55:44.181164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.181226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 00:25:40.764 [2024-11-26 20:55:44.181411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.181443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 00:25:40.764 [2024-11-26 20:55:44.181553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.181583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 00:25:40.764 [2024-11-26 20:55:44.181821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.181899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 00:25:40.764 [2024-11-26 20:55:44.182166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.182226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 00:25:40.764 [2024-11-26 20:55:44.182447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.182479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 00:25:40.764 [2024-11-26 20:55:44.182598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.182628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 00:25:40.764 [2024-11-26 20:55:44.182743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.182773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 00:25:40.764 [2024-11-26 20:55:44.182999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.183058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 
00:25:40.764 [2024-11-26 20:55:44.183267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.183354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 00:25:40.764 [2024-11-26 20:55:44.183496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.183527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 00:25:40.764 [2024-11-26 20:55:44.183656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.183685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 00:25:40.764 [2024-11-26 20:55:44.183836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.183896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 00:25:40.764 [2024-11-26 20:55:44.184135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.184195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 00:25:40.764 [2024-11-26 20:55:44.184409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.184442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 00:25:40.764 [2024-11-26 20:55:44.184562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.184593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 00:25:40.764 [2024-11-26 20:55:44.184694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.184725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 00:25:40.764 [2024-11-26 20:55:44.184901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.184958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 00:25:40.764 [2024-11-26 20:55:44.185191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.185251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 
00:25:40.764 [2024-11-26 20:55:44.185434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.185472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 00:25:40.764 [2024-11-26 20:55:44.185610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.185641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.764 qpair failed and we were unable to recover it. 00:25:40.764 [2024-11-26 20:55:44.185833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.764 [2024-11-26 20:55:44.185893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.765 qpair failed and we were unable to recover it. 00:25:40.765 [2024-11-26 20:55:44.186163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.765 [2024-11-26 20:55:44.186223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.765 qpair failed and we were unable to recover it. 00:25:40.765 [2024-11-26 20:55:44.186404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.765 [2024-11-26 20:55:44.186435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.765 qpair failed and we were unable to recover it. 00:25:40.765 [2024-11-26 20:55:44.186563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.765 [2024-11-26 20:55:44.186593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.765 qpair failed and we were unable to recover it. 00:25:40.765 [2024-11-26 20:55:44.186745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.765 [2024-11-26 20:55:44.186777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.765 qpair failed and we were unable to recover it. 00:25:40.765 [2024-11-26 20:55:44.186878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.765 [2024-11-26 20:55:44.186940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.765 qpair failed and we were unable to recover it. 00:25:40.765 [2024-11-26 20:55:44.187139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.765 [2024-11-26 20:55:44.187200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.765 qpair failed and we were unable to recover it. 00:25:40.765 [2024-11-26 20:55:44.187439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.765 [2024-11-26 20:55:44.187473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.765 qpair failed and we were unable to recover it. 
00:25:40.765 [2024-11-26 20:55:44.187650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.765 [2024-11-26 20:55:44.187710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.765 qpair failed and we were unable to recover it. 00:25:40.765 [2024-11-26 20:55:44.187906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.765 [2024-11-26 20:55:44.187967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.765 qpair failed and we were unable to recover it. 00:25:40.765 [2024-11-26 20:55:44.188239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.765 [2024-11-26 20:55:44.188299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.765 qpair failed and we were unable to recover it. 00:25:40.765 [2024-11-26 20:55:44.188480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.765 [2024-11-26 20:55:44.188512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.765 qpair failed and we were unable to recover it. 00:25:40.765 [2024-11-26 20:55:44.188698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.765 [2024-11-26 20:55:44.188776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.765 qpair failed and we were unable to recover it. 00:25:40.765 [2024-11-26 20:55:44.188972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.765 [2024-11-26 20:55:44.189035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.765 qpair failed and we were unable to recover it. 00:25:40.765 [2024-11-26 20:55:44.189262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.765 [2024-11-26 20:55:44.189337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.765 qpair failed and we were unable to recover it. 00:25:40.765 [2024-11-26 20:55:44.189459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.765 [2024-11-26 20:55:44.189490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.765 qpair failed and we were unable to recover it. 00:25:40.765 [2024-11-26 20:55:44.189622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.765 [2024-11-26 20:55:44.189652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.765 qpair failed and we were unable to recover it. 00:25:40.765 [2024-11-26 20:55:44.189810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.765 [2024-11-26 20:55:44.189840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.765 qpair failed and we were unable to recover it. 
00:25:40.765 [2024-11-26 20:55:44.190088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.765 [2024-11-26 20:55:44.190147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.765 qpair failed and we were unable to recover it. 00:25:40.765 [2024-11-26 20:55:44.190392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.765 [2024-11-26 20:55:44.190425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.765 qpair failed and we were unable to recover it. 00:25:40.765 [2024-11-26 20:55:44.190522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.765 [2024-11-26 20:55:44.190553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.765 qpair failed and we were unable to recover it. 00:25:40.765 [2024-11-26 20:55:44.190758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.765 [2024-11-26 20:55:44.190837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.765 qpair failed and we were unable to recover it. 00:25:40.765 [2024-11-26 20:55:44.191079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.765 [2024-11-26 20:55:44.191139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.765 qpair failed and we were unable to recover it. 00:25:40.765 [2024-11-26 20:55:44.191331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.765 [2024-11-26 20:55:44.191385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.765 qpair failed and we were unable to recover it. 00:25:40.765 [2024-11-26 20:55:44.191506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.765 [2024-11-26 20:55:44.191536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.765 qpair failed and we were unable to recover it. 00:25:40.765 [2024-11-26 20:55:44.191718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.765 [2024-11-26 20:55:44.191798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.765 qpair failed and we were unable to recover it. 00:25:40.765 [2024-11-26 20:55:44.192014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.765 [2024-11-26 20:55:44.192074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.765 qpair failed and we were unable to recover it. 00:25:40.765 [2024-11-26 20:55:44.192279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.765 [2024-11-26 20:55:44.192367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.765 qpair failed and we were unable to recover it. 
00:25:40.765 [2024-11-26 20:55:44.192503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.765 [2024-11-26 20:55:44.192535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.765 qpair failed and we were unable to recover it. 00:25:40.765 [2024-11-26 20:55:44.192769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.765 [2024-11-26 20:55:44.192828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.765 qpair failed and we were unable to recover it. 00:25:40.766 [2024-11-26 20:55:44.193090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.193149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 00:25:40.766 [2024-11-26 20:55:44.193414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.193447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 00:25:40.766 [2024-11-26 20:55:44.193554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.193584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 00:25:40.766 [2024-11-26 20:55:44.193826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.193903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 00:25:40.766 [2024-11-26 20:55:44.194107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.194165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 00:25:40.766 [2024-11-26 20:55:44.194424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.194456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 00:25:40.766 [2024-11-26 20:55:44.194589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.194619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 00:25:40.766 [2024-11-26 20:55:44.194902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.194934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 
00:25:40.766 [2024-11-26 20:55:44.195210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.195282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 00:25:40.766 [2024-11-26 20:55:44.195452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.195482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 00:25:40.766 [2024-11-26 20:55:44.195617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.195649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 00:25:40.766 [2024-11-26 20:55:44.195811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.195890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 00:25:40.766 [2024-11-26 20:55:44.196075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.196123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 00:25:40.766 [2024-11-26 20:55:44.196317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.196350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 00:25:40.766 [2024-11-26 20:55:44.196508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.196539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 00:25:40.766 [2024-11-26 20:55:44.196643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.196674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 00:25:40.766 [2024-11-26 20:55:44.196899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.196977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 00:25:40.766 [2024-11-26 20:55:44.197202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.197262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 
00:25:40.766 [2024-11-26 20:55:44.197578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.197657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 00:25:40.766 [2024-11-26 20:55:44.197906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.197986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 00:25:40.766 [2024-11-26 20:55:44.198265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.198339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 00:25:40.766 [2024-11-26 20:55:44.198580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.198612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 00:25:40.766 [2024-11-26 20:55:44.198756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.198786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 00:25:40.766 [2024-11-26 20:55:44.199055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.199135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 00:25:40.766 [2024-11-26 20:55:44.199391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.199424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 00:25:40.766 [2024-11-26 20:55:44.199557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.199587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 00:25:40.766 [2024-11-26 20:55:44.199720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.199750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 00:25:40.766 [2024-11-26 20:55:44.199939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.199999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 
00:25:40.766 [2024-11-26 20:55:44.200220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.200280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 00:25:40.766 [2024-11-26 20:55:44.200544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.200605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 00:25:40.766 [2024-11-26 20:55:44.200820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.200881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 00:25:40.766 [2024-11-26 20:55:44.201081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.201144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 00:25:40.766 [2024-11-26 20:55:44.201379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.201442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 00:25:40.766 [2024-11-26 20:55:44.201694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.201727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 00:25:40.766 [2024-11-26 20:55:44.201851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.201881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 00:25:40.766 [2024-11-26 20:55:44.201981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.202012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 00:25:40.766 [2024-11-26 20:55:44.202173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.202203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.766 qpair failed and we were unable to recover it. 00:25:40.766 [2024-11-26 20:55:44.202461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.766 [2024-11-26 20:55:44.202541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.767 qpair failed and we were unable to recover it. 
00:25:40.767 [2024-11-26 20:55:44.202828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.767 [2024-11-26 20:55:44.202907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.767 qpair failed and we were unable to recover it. 00:25:40.767 [2024-11-26 20:55:44.203131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.767 [2024-11-26 20:55:44.203163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.767 qpair failed and we were unable to recover it. 00:25:40.767 [2024-11-26 20:55:44.203273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.767 [2024-11-26 20:55:44.203310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.767 qpair failed and we were unable to recover it. 00:25:40.767 [2024-11-26 20:55:44.203419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.767 [2024-11-26 20:55:44.203450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.767 qpair failed and we were unable to recover it. 00:25:40.767 [2024-11-26 20:55:44.203649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.767 [2024-11-26 20:55:44.203733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.767 qpair failed and we were unable to recover it. 00:25:40.767 [2024-11-26 20:55:44.203920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.767 [2024-11-26 20:55:44.203979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.767 qpair failed and we were unable to recover it. 00:25:40.767 [2024-11-26 20:55:44.204250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.767 [2024-11-26 20:55:44.204336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.767 qpair failed and we were unable to recover it. 00:25:40.767 [2024-11-26 20:55:44.204608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.767 [2024-11-26 20:55:44.204667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.767 qpair failed and we were unable to recover it. 00:25:40.767 [2024-11-26 20:55:44.204939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.767 [2024-11-26 20:55:44.204998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.767 qpair failed and we were unable to recover it. 00:25:40.767 [2024-11-26 20:55:44.205197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.767 [2024-11-26 20:55:44.205259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.767 qpair failed and we were unable to recover it. 
00:25:40.767 [2024-11-26 20:55:44.205552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.767 [2024-11-26 20:55:44.205624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.767 qpair failed and we were unable to recover it. 00:25:40.767 [2024-11-26 20:55:44.205904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.767 [2024-11-26 20:55:44.205964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.767 qpair failed and we were unable to recover it. 00:25:40.767 [2024-11-26 20:55:44.206190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.767 [2024-11-26 20:55:44.206222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.767 qpair failed and we were unable to recover it. 00:25:40.767 [2024-11-26 20:55:44.206324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.767 [2024-11-26 20:55:44.206355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.767 qpair failed and we were unable to recover it. 00:25:40.767 [2024-11-26 20:55:44.206519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.767 [2024-11-26 20:55:44.206576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.767 qpair failed and we were unable to recover it. 00:25:40.767 [2024-11-26 20:55:44.206835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.767 [2024-11-26 20:55:44.206896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.767 qpair failed and we were unable to recover it. 00:25:40.767 [2024-11-26 20:55:44.207133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.767 [2024-11-26 20:55:44.207194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.767 qpair failed and we were unable to recover it. 00:25:40.767 [2024-11-26 20:55:44.207387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.767 [2024-11-26 20:55:44.207449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.767 qpair failed and we were unable to recover it. 00:25:40.767 [2024-11-26 20:55:44.207682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.767 [2024-11-26 20:55:44.207743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.767 qpair failed and we were unable to recover it. 00:25:40.767 [2024-11-26 20:55:44.208011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.767 [2024-11-26 20:55:44.208070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.767 qpair failed and we were unable to recover it. 
00:25:40.767 [2024-11-26 20:55:44.208325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.767 [2024-11-26 20:55:44.208387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.767 qpair failed and we were unable to recover it. 00:25:40.767 [2024-11-26 20:55:44.208622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.767 [2024-11-26 20:55:44.208682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.767 qpair failed and we were unable to recover it. 00:25:40.767 [2024-11-26 20:55:44.208952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.767 [2024-11-26 20:55:44.208984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.767 qpair failed and we were unable to recover it. 00:25:40.767 [2024-11-26 20:55:44.209122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.767 [2024-11-26 20:55:44.209152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.767 qpair failed and we were unable to recover it. 00:25:40.767 [2024-11-26 20:55:44.209362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.767 [2024-11-26 20:55:44.209423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.767 qpair failed and we were unable to recover it. 00:25:40.767 [2024-11-26 20:55:44.209628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.767 [2024-11-26 20:55:44.209690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.767 qpair failed and we were unable to recover it. 00:25:40.767 [2024-11-26 20:55:44.209918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.767 [2024-11-26 20:55:44.209980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.767 qpair failed and we were unable to recover it. 00:25:40.767 [2024-11-26 20:55:44.210180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.767 [2024-11-26 20:55:44.210241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.767 qpair failed and we were unable to recover it. 00:25:40.767 [2024-11-26 20:55:44.210519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.767 [2024-11-26 20:55:44.210551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.767 qpair failed and we were unable to recover it. 00:25:40.767 [2024-11-26 20:55:44.210723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.767 [2024-11-26 20:55:44.210789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.767 qpair failed and we were unable to recover it. 
00:25:40.767 [2024-11-26 20:55:44.211021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.767 [2024-11-26 20:55:44.211081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.767 qpair failed and we were unable to recover it. 00:25:40.767 [2024-11-26 20:55:44.211317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.767 [2024-11-26 20:55:44.211380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.767 qpair failed and we were unable to recover it. 00:25:40.767 [2024-11-26 20:55:44.211653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.767 [2024-11-26 20:55:44.211685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.767 qpair failed and we were unable to recover it. 00:25:40.767 [2024-11-26 20:55:44.211815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.767 [2024-11-26 20:55:44.211845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.768 qpair failed and we were unable to recover it. 00:25:40.768 [2024-11-26 20:55:44.212088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.768 [2024-11-26 20:55:44.212147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.768 qpair failed and we were unable to recover it. 00:25:40.768 [2024-11-26 20:55:44.212376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.768 [2024-11-26 20:55:44.212439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.768 qpair failed and we were unable to recover it. 00:25:40.768 [2024-11-26 20:55:44.212692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.768 [2024-11-26 20:55:44.212725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.768 qpair failed and we were unable to recover it. 00:25:40.768 [2024-11-26 20:55:44.212844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.768 [2024-11-26 20:55:44.212874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.768 qpair failed and we were unable to recover it. 00:25:40.768 [2024-11-26 20:55:44.213030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.768 [2024-11-26 20:55:44.213060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.768 qpair failed and we were unable to recover it. 00:25:40.768 [2024-11-26 20:55:44.213170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.768 [2024-11-26 20:55:44.213200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.768 qpair failed and we were unable to recover it. 
00:25:40.768 [2024-11-26 20:55:44.213419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.768 [2024-11-26 20:55:44.213451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.768 qpair failed and we were unable to recover it. 00:25:40.768 [2024-11-26 20:55:44.213589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.768 [2024-11-26 20:55:44.213619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.768 qpair failed and we were unable to recover it. 00:25:40.768 [2024-11-26 20:55:44.213806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.768 [2024-11-26 20:55:44.213863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.768 qpair failed and we were unable to recover it. 00:25:40.768 [2024-11-26 20:55:44.214081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.768 [2024-11-26 20:55:44.214136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.768 qpair failed and we were unable to recover it. 00:25:40.768 [2024-11-26 20:55:44.214357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.768 [2024-11-26 20:55:44.214416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.768 qpair failed and we were unable to recover it. 00:25:40.768 [2024-11-26 20:55:44.214669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.768 [2024-11-26 20:55:44.214726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.768 qpair failed and we were unable to recover it. 00:25:40.768 [2024-11-26 20:55:44.214909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.768 [2024-11-26 20:55:44.214967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.768 qpair failed and we were unable to recover it. 00:25:40.768 [2024-11-26 20:55:44.215150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.768 [2024-11-26 20:55:44.215207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.768 qpair failed and we were unable to recover it. 00:25:40.768 [2024-11-26 20:55:44.215438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.768 [2024-11-26 20:55:44.215495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.768 qpair failed and we were unable to recover it. 00:25:40.768 [2024-11-26 20:55:44.215706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.768 [2024-11-26 20:55:44.215763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.768 qpair failed and we were unable to recover it. 
00:25:40.768 [2024-11-26 20:55:44.216008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.768 [2024-11-26 20:55:44.216080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.768 qpair failed and we were unable to recover it. 00:25:40.768 [2024-11-26 20:55:44.216341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.768 [2024-11-26 20:55:44.216389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.768 qpair failed and we were unable to recover it. 00:25:40.768 [2024-11-26 20:55:44.216606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.768 [2024-11-26 20:55:44.216654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.768 qpair failed and we were unable to recover it. 00:25:40.768 [2024-11-26 20:55:44.216860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.768 [2024-11-26 20:55:44.216892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.768 qpair failed and we were unable to recover it. 00:25:40.768 [2024-11-26 20:55:44.217001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.768 [2024-11-26 20:55:44.217031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.768 qpair failed and we were unable to recover it. 00:25:40.768 [2024-11-26 20:55:44.217169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.768 [2024-11-26 20:55:44.217213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.768 qpair failed and we were unable to recover it. 00:25:40.768 [2024-11-26 20:55:44.217425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.768 [2024-11-26 20:55:44.217473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.768 qpair failed and we were unable to recover it. 00:25:40.768 [2024-11-26 20:55:44.217685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.768 [2024-11-26 20:55:44.217736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.768 qpair failed and we were unable to recover it. 00:25:40.768 [2024-11-26 20:55:44.217975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.768 [2024-11-26 20:55:44.218035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.768 qpair failed and we were unable to recover it. 00:25:40.768 [2024-11-26 20:55:44.218262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.768 [2024-11-26 20:55:44.218365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.768 qpair failed and we were unable to recover it. 
00:25:40.768 [2024-11-26 20:55:44.218548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.768 [2024-11-26 20:55:44.218594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420
00:25:40.768 qpair failed and we were unable to recover it.
00:25:40.768 [... condensed: the same three-message failure sequence (posix.c:1054:posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 20:55:44.218810 through 20:55:44.263474 ...]
00:25:40.773 [... condensed: the same connect() failed (errno = 111) / sock connection error / qpair recovery failure sequence continues from 20:55:44.263673 through 20:55:44.275121, still targeting addr=10.0.0.2, port=4420; the failing tqpair remains 0x7f27b4000b90 through 20:55:44.264210, is reported as 0x1f0dfa0 from 20:55:44.264471 to 20:55:44.269415, and reverts to 0x7f27b4000b90 from 20:55:44.269622 onward ...]
00:25:40.774 [2024-11-26 20:55:44.275382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.774 [2024-11-26 20:55:44.275440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.774 qpair failed and we were unable to recover it. 00:25:40.774 [2024-11-26 20:55:44.275664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.774 [2024-11-26 20:55:44.275721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.774 qpair failed and we were unable to recover it. 00:25:40.774 [2024-11-26 20:55:44.275908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.774 [2024-11-26 20:55:44.275965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.774 qpair failed and we were unable to recover it. 00:25:40.774 [2024-11-26 20:55:44.276157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.774 [2024-11-26 20:55:44.276214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.774 qpair failed and we were unable to recover it. 00:25:40.774 [2024-11-26 20:55:44.276463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.774 [2024-11-26 20:55:44.276517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.774 qpair failed and we were unable to recover it. 00:25:40.774 [2024-11-26 20:55:44.276690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.774 [2024-11-26 20:55:44.276743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.774 qpair failed and we were unable to recover it. 00:25:40.774 [2024-11-26 20:55:44.276910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.774 [2024-11-26 20:55:44.276962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.774 qpair failed and we were unable to recover it. 00:25:40.774 [2024-11-26 20:55:44.277103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.774 [2024-11-26 20:55:44.277171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.774 qpair failed and we were unable to recover it. 00:25:40.774 [2024-11-26 20:55:44.277328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.774 [2024-11-26 20:55:44.277383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.774 qpair failed and we were unable to recover it. 00:25:40.774 [2024-11-26 20:55:44.277601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.774 [2024-11-26 20:55:44.277653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.774 qpair failed and we were unable to recover it. 
00:25:40.774 [2024-11-26 20:55:44.277895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.774 [2024-11-26 20:55:44.277948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.774 qpair failed and we were unable to recover it. 00:25:40.774 [2024-11-26 20:55:44.278152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.774 [2024-11-26 20:55:44.278208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.774 qpair failed and we were unable to recover it. 00:25:40.774 [2024-11-26 20:55:44.278438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.774 [2024-11-26 20:55:44.278493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.774 qpair failed and we were unable to recover it. 00:25:40.774 [2024-11-26 20:55:44.278733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.774 [2024-11-26 20:55:44.278786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.774 qpair failed and we were unable to recover it. 00:25:40.774 [2024-11-26 20:55:44.278987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.774 [2024-11-26 20:55:44.279042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.774 qpair failed and we were unable to recover it. 00:25:40.774 [2024-11-26 20:55:44.279217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.774 [2024-11-26 20:55:44.279270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.774 qpair failed and we were unable to recover it. 00:25:40.774 [2024-11-26 20:55:44.279480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.774 [2024-11-26 20:55:44.279534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.774 qpair failed and we were unable to recover it. 00:25:40.774 [2024-11-26 20:55:44.279742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.279794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 00:25:40.775 [2024-11-26 20:55:44.280003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.280055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 00:25:40.775 [2024-11-26 20:55:44.280228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.280281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 
00:25:40.775 [2024-11-26 20:55:44.280548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.280600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 00:25:40.775 [2024-11-26 20:55:44.280808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.280862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 00:25:40.775 [2024-11-26 20:55:44.281064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.281116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 00:25:40.775 [2024-11-26 20:55:44.281340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.281395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 00:25:40.775 [2024-11-26 20:55:44.281579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.281631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 00:25:40.775 [2024-11-26 20:55:44.281821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.281873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 00:25:40.775 [2024-11-26 20:55:44.282075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.282127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 00:25:40.775 [2024-11-26 20:55:44.282333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.282386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 00:25:40.775 [2024-11-26 20:55:44.282588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.282662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 00:25:40.775 [2024-11-26 20:55:44.282906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.282967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 
00:25:40.775 [2024-11-26 20:55:44.283202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.283264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 00:25:40.775 [2024-11-26 20:55:44.283471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.283527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 00:25:40.775 [2024-11-26 20:55:44.283842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.283932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 00:25:40.775 [2024-11-26 20:55:44.284165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.284225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 00:25:40.775 [2024-11-26 20:55:44.284482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.284563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 00:25:40.775 [2024-11-26 20:55:44.284823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.284900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 00:25:40.775 [2024-11-26 20:55:44.285085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.285138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 00:25:40.775 [2024-11-26 20:55:44.285339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.285394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 00:25:40.775 [2024-11-26 20:55:44.285565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.285618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 00:25:40.775 [2024-11-26 20:55:44.285786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.285839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 
00:25:40.775 [2024-11-26 20:55:44.286034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.286086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 00:25:40.775 [2024-11-26 20:55:44.286326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.286379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 00:25:40.775 [2024-11-26 20:55:44.286528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.286580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 00:25:40.775 [2024-11-26 20:55:44.286798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.286851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 00:25:40.775 [2024-11-26 20:55:44.287081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.287136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 00:25:40.775 [2024-11-26 20:55:44.287378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.287431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 00:25:40.775 [2024-11-26 20:55:44.287634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.287710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 00:25:40.775 [2024-11-26 20:55:44.288004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.288095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 00:25:40.775 [2024-11-26 20:55:44.288340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.288401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 00:25:40.775 [2024-11-26 20:55:44.288664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.288742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 
00:25:40.775 [2024-11-26 20:55:44.289003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.289083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 00:25:40.775 [2024-11-26 20:55:44.289317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.289379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 00:25:40.775 [2024-11-26 20:55:44.289616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.289677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 00:25:40.775 [2024-11-26 20:55:44.289879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.289958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 00:25:40.775 [2024-11-26 20:55:44.290174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.290230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 00:25:40.775 [2024-11-26 20:55:44.290514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.775 [2024-11-26 20:55:44.290572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.775 qpair failed and we were unable to recover it. 00:25:40.776 [2024-11-26 20:55:44.290786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.290841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 00:25:40.776 [2024-11-26 20:55:44.291029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.291085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 00:25:40.776 [2024-11-26 20:55:44.291323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.291381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 00:25:40.776 [2024-11-26 20:55:44.291596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.291652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 
00:25:40.776 [2024-11-26 20:55:44.291838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.291894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 00:25:40.776 [2024-11-26 20:55:44.292126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.292183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 00:25:40.776 [2024-11-26 20:55:44.292421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.292478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 00:25:40.776 [2024-11-26 20:55:44.292701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.292757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 00:25:40.776 [2024-11-26 20:55:44.292931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.292992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 00:25:40.776 [2024-11-26 20:55:44.293151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.293211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 00:25:40.776 [2024-11-26 20:55:44.293531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.293620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 00:25:40.776 [2024-11-26 20:55:44.293824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.293902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 00:25:40.776 [2024-11-26 20:55:44.294083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.294157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 00:25:40.776 [2024-11-26 20:55:44.294358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.294415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 
00:25:40.776 [2024-11-26 20:55:44.294631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.294692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 00:25:40.776 [2024-11-26 20:55:44.294931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.294987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 00:25:40.776 [2024-11-26 20:55:44.295191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.295252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 00:25:40.776 [2024-11-26 20:55:44.295514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.295575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 00:25:40.776 [2024-11-26 20:55:44.295885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.295976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 00:25:40.776 [2024-11-26 20:55:44.296219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.296282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 00:25:40.776 [2024-11-26 20:55:44.296580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.296651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 00:25:40.776 [2024-11-26 20:55:44.296962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.297021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 00:25:40.776 [2024-11-26 20:55:44.297190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.297247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 00:25:40.776 [2024-11-26 20:55:44.297510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.297567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 
00:25:40.776 [2024-11-26 20:55:44.297775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.297845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 00:25:40.776 [2024-11-26 20:55:44.298131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.298186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 00:25:40.776 [2024-11-26 20:55:44.298412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.298470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 00:25:40.776 [2024-11-26 20:55:44.298658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.298713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 00:25:40.776 [2024-11-26 20:55:44.298896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.298954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 00:25:40.776 [2024-11-26 20:55:44.299208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.299266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 00:25:40.776 [2024-11-26 20:55:44.299563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.299655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 00:25:40.776 [2024-11-26 20:55:44.299962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.300045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 00:25:40.776 [2024-11-26 20:55:44.300329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.300392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 00:25:40.776 [2024-11-26 20:55:44.300666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.300727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 
00:25:40.776 [2024-11-26 20:55:44.301037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.301116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 00:25:40.776 [2024-11-26 20:55:44.301388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.301450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 00:25:40.776 [2024-11-26 20:55:44.301734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.301795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 00:25:40.776 [2024-11-26 20:55:44.302031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.776 [2024-11-26 20:55:44.302092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.776 qpair failed and we were unable to recover it. 00:25:40.776 [2024-11-26 20:55:44.302354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.302416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 00:25:40.777 [2024-11-26 20:55:44.302652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.302715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 00:25:40.777 [2024-11-26 20:55:44.302947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.303008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 00:25:40.777 [2024-11-26 20:55:44.303239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.303319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 00:25:40.777 [2024-11-26 20:55:44.303562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.303625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 00:25:40.777 [2024-11-26 20:55:44.303893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.303954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 
00:25:40.777 [2024-11-26 20:55:44.304159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.304219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 00:25:40.777 [2024-11-26 20:55:44.304463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.304545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 00:25:40.777 [2024-11-26 20:55:44.304783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.304860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 00:25:40.777 [2024-11-26 20:55:44.305132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.305191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 00:25:40.777 [2024-11-26 20:55:44.305437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.305499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 00:25:40.777 [2024-11-26 20:55:44.305789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.305850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 00:25:40.777 [2024-11-26 20:55:44.306031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.306091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 00:25:40.777 [2024-11-26 20:55:44.306292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.306369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 00:25:40.777 [2024-11-26 20:55:44.306635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.306695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 00:25:40.777 [2024-11-26 20:55:44.306928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.306990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 
00:25:40.777 [2024-11-26 20:55:44.307259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.307334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 00:25:40.777 [2024-11-26 20:55:44.307554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.307615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 00:25:40.777 [2024-11-26 20:55:44.307801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.307862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 00:25:40.777 [2024-11-26 20:55:44.308062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.308123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 00:25:40.777 [2024-11-26 20:55:44.308366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.308440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 00:25:40.777 [2024-11-26 20:55:44.308642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.308705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 00:25:40.777 [2024-11-26 20:55:44.308972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.309032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 00:25:40.777 [2024-11-26 20:55:44.309231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.309292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 00:25:40.777 [2024-11-26 20:55:44.309544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.309607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 00:25:40.777 [2024-11-26 20:55:44.309805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.309865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 
00:25:40.777 [2024-11-26 20:55:44.310133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.310193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 00:25:40.777 [2024-11-26 20:55:44.310446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.310508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 00:25:40.777 [2024-11-26 20:55:44.310740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.310801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 00:25:40.777 [2024-11-26 20:55:44.311028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.311089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 00:25:40.777 [2024-11-26 20:55:44.311275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.311354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 00:25:40.777 [2024-11-26 20:55:44.311624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.311686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 00:25:40.777 [2024-11-26 20:55:44.311952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.312030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 00:25:40.777 [2024-11-26 20:55:44.312273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.312348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 00:25:40.777 [2024-11-26 20:55:44.312619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.312682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 00:25:40.777 [2024-11-26 20:55:44.312927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.313005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 
00:25:40.777 [2024-11-26 20:55:44.313239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.313299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 00:25:40.777 [2024-11-26 20:55:44.313589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.777 [2024-11-26 20:55:44.313669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.777 qpair failed and we were unable to recover it. 00:25:40.777 [2024-11-26 20:55:44.313873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.313952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 00:25:40.778 [2024-11-26 20:55:44.314180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.314243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 00:25:40.778 [2024-11-26 20:55:44.314495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.314575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 00:25:40.778 [2024-11-26 20:55:44.314833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.314915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 00:25:40.778 [2024-11-26 20:55:44.315149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.315213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 00:25:40.778 [2024-11-26 20:55:44.315452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.315537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 00:25:40.778 [2024-11-26 20:55:44.315781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.315841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 00:25:40.778 [2024-11-26 20:55:44.316107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.316168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 
00:25:40.778 [2024-11-26 20:55:44.316351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.316412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 00:25:40.778 [2024-11-26 20:55:44.316691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.316770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 00:25:40.778 [2024-11-26 20:55:44.317020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.317099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 00:25:40.778 [2024-11-26 20:55:44.317388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.317468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 00:25:40.778 [2024-11-26 20:55:44.317646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.317708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 00:25:40.778 [2024-11-26 20:55:44.317908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.317969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 00:25:40.778 [2024-11-26 20:55:44.318145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.318205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 00:25:40.778 [2024-11-26 20:55:44.318432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.318516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 00:25:40.778 [2024-11-26 20:55:44.318802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.318880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 00:25:40.778 [2024-11-26 20:55:44.319131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.319192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 
00:25:40.778 [2024-11-26 20:55:44.319449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.319529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 00:25:40.778 [2024-11-26 20:55:44.319766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.319828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 00:25:40.778 [2024-11-26 20:55:44.320037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.320099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 00:25:40.778 [2024-11-26 20:55:44.320363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.320425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 00:25:40.778 [2024-11-26 20:55:44.320727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.320822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 00:25:40.778 [2024-11-26 20:55:44.321034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.321095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 00:25:40.778 [2024-11-26 20:55:44.321398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.321479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 00:25:40.778 [2024-11-26 20:55:44.321766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.321827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 00:25:40.778 [2024-11-26 20:55:44.322057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.322117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 00:25:40.778 [2024-11-26 20:55:44.322334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.322396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 
00:25:40.778 [2024-11-26 20:55:44.322665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.322745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 00:25:40.778 [2024-11-26 20:55:44.322980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.323040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 00:25:40.778 [2024-11-26 20:55:44.323278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.323354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 00:25:40.778 [2024-11-26 20:55:44.323665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.323745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 00:25:40.778 [2024-11-26 20:55:44.324043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.324121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 00:25:40.778 [2024-11-26 20:55:44.324382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.324462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 00:25:40.778 [2024-11-26 20:55:44.324760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.324841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 00:25:40.778 [2024-11-26 20:55:44.325063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.325122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 00:25:40.778 [2024-11-26 20:55:44.325374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.325458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 00:25:40.778 [2024-11-26 20:55:44.325745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.325824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.778 qpair failed and we were unable to recover it. 
00:25:40.778 [2024-11-26 20:55:44.326090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.778 [2024-11-26 20:55:44.326150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.779 qpair failed and we were unable to recover it. 00:25:40.779 [2024-11-26 20:55:44.326397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.779 [2024-11-26 20:55:44.326477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.779 qpair failed and we were unable to recover it. 00:25:40.779 [2024-11-26 20:55:44.326745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.779 [2024-11-26 20:55:44.326825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.779 qpair failed and we were unable to recover it. 00:25:40.779 [2024-11-26 20:55:44.327055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.779 [2024-11-26 20:55:44.327116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.779 qpair failed and we were unable to recover it. 00:25:40.779 [2024-11-26 20:55:44.327348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.779 [2024-11-26 20:55:44.327411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.779 qpair failed and we were unable to recover it. 00:25:40.779 [2024-11-26 20:55:44.327642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.779 [2024-11-26 20:55:44.327722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.779 qpair failed and we were unable to recover it. 00:25:40.779 [2024-11-26 20:55:44.327964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.779 [2024-11-26 20:55:44.328025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.779 qpair failed and we were unable to recover it. 00:25:40.779 [2024-11-26 20:55:44.328287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.779 [2024-11-26 20:55:44.328363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.779 qpair failed and we were unable to recover it. 00:25:40.779 [2024-11-26 20:55:44.328652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.779 [2024-11-26 20:55:44.328729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.779 qpair failed and we were unable to recover it. 00:25:40.779 [2024-11-26 20:55:44.328992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.779 [2024-11-26 20:55:44.329070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.779 qpair failed and we were unable to recover it. 
00:25:40.779 [2024-11-26 20:55:44.329267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.779 [2024-11-26 20:55:44.329345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.779 qpair failed and we were unable to recover it. 00:25:40.779 [2024-11-26 20:55:44.329604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.779 [2024-11-26 20:55:44.329685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.779 qpair failed and we were unable to recover it. 00:25:40.779 [2024-11-26 20:55:44.329975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.779 [2024-11-26 20:55:44.330054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.779 qpair failed and we were unable to recover it. 00:25:40.779 [2024-11-26 20:55:44.330254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.779 [2024-11-26 20:55:44.330341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.779 qpair failed and we were unable to recover it. 00:25:40.779 [2024-11-26 20:55:44.330565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.779 [2024-11-26 20:55:44.330645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.779 qpair failed and we were unable to recover it. 00:25:40.779 [2024-11-26 20:55:44.330939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.779 [2024-11-26 20:55:44.331018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.779 qpair failed and we were unable to recover it. 00:25:40.779 [2024-11-26 20:55:44.331239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.779 [2024-11-26 20:55:44.331299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.779 qpair failed and we were unable to recover it. 00:25:40.779 [2024-11-26 20:55:44.331582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.779 [2024-11-26 20:55:44.331644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.779 qpair failed and we were unable to recover it. 00:25:40.779 [2024-11-26 20:55:44.331896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.779 [2024-11-26 20:55:44.331957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.779 qpair failed and we were unable to recover it. 00:25:40.779 [2024-11-26 20:55:44.332176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.779 [2024-11-26 20:55:44.332236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.779 qpair failed and we were unable to recover it. 
00:25:40.779 [2024-11-26 20:55:44.332553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.779 [2024-11-26 20:55:44.332639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.779 qpair failed and we were unable to recover it. 00:25:40.779 [2024-11-26 20:55:44.332936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.779 [2024-11-26 20:55:44.333014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.779 qpair failed and we were unable to recover it. 00:25:40.779 [2024-11-26 20:55:44.333250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.779 [2024-11-26 20:55:44.333322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.779 qpair failed and we were unable to recover it. 00:25:40.779 [2024-11-26 20:55:44.333622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.779 [2024-11-26 20:55:44.333700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.779 qpair failed and we were unable to recover it. 00:25:40.779 [2024-11-26 20:55:44.333962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.779 [2024-11-26 20:55:44.334053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.779 qpair failed and we were unable to recover it. 00:25:40.779 [2024-11-26 20:55:44.334287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.779 [2024-11-26 20:55:44.334378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.779 qpair failed and we were unable to recover it. 00:25:40.779 [2024-11-26 20:55:44.334643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.779 [2024-11-26 20:55:44.334721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.779 qpair failed and we were unable to recover it. 00:25:40.779 [2024-11-26 20:55:44.334977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.779 [2024-11-26 20:55:44.335057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.779 qpair failed and we were unable to recover it. 00:25:40.779 [2024-11-26 20:55:44.335260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.779 [2024-11-26 20:55:44.335336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.779 qpair failed and we were unable to recover it. 00:25:40.779 [2024-11-26 20:55:44.335623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.779 [2024-11-26 20:55:44.335702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.779 qpair failed and we were unable to recover it. 
00:25:40.779 [2024-11-26 20:55:44.335962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.779 [2024-11-26 20:55:44.336044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.779 qpair failed and we were unable to recover it. 00:25:40.779 [2024-11-26 20:55:44.336331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.779 [2024-11-26 20:55:44.336395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.779 qpair failed and we were unable to recover it. 00:25:40.779 [2024-11-26 20:55:44.336611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.779 [2024-11-26 20:55:44.336690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.779 qpair failed and we were unable to recover it. 00:25:40.779 [2024-11-26 20:55:44.336947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.779 [2024-11-26 20:55:44.337026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.779 qpair failed and we were unable to recover it. 00:25:40.779 [2024-11-26 20:55:44.337223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.779 [2024-11-26 20:55:44.337284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 00:25:40.780 [2024-11-26 20:55:44.337561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.337645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 00:25:40.780 [2024-11-26 20:55:44.337920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.337999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 00:25:40.780 [2024-11-26 20:55:44.338225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.338288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 00:25:40.780 [2024-11-26 20:55:44.338581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.338661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 00:25:40.780 [2024-11-26 20:55:44.338954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.339033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 
00:25:40.780 [2024-11-26 20:55:44.339271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.339345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 00:25:40.780 [2024-11-26 20:55:44.339579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.339658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 00:25:40.780 [2024-11-26 20:55:44.339924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.340004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 00:25:40.780 [2024-11-26 20:55:44.340243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.340332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 00:25:40.780 [2024-11-26 20:55:44.340566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.340659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 00:25:40.780 [2024-11-26 20:55:44.340958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.341037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 00:25:40.780 [2024-11-26 20:55:44.341231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.341291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 00:25:40.780 [2024-11-26 20:55:44.341617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.341696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 00:25:40.780 [2024-11-26 20:55:44.341988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.342066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 00:25:40.780 [2024-11-26 20:55:44.342346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.342409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 
00:25:40.780 [2024-11-26 20:55:44.342667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.342745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 00:25:40.780 [2024-11-26 20:55:44.343098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.343197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 00:25:40.780 [2024-11-26 20:55:44.343492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.343556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 00:25:40.780 [2024-11-26 20:55:44.343869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.343934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 00:25:40.780 [2024-11-26 20:55:44.344245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.344335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 00:25:40.780 [2024-11-26 20:55:44.344664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.344729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 00:25:40.780 [2024-11-26 20:55:44.345044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.345108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 00:25:40.780 [2024-11-26 20:55:44.345355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.345416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 00:25:40.780 [2024-11-26 20:55:44.345714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.345781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 00:25:40.780 [2024-11-26 20:55:44.346025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.346090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 
00:25:40.780 [2024-11-26 20:55:44.346344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.346406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 00:25:40.780 [2024-11-26 20:55:44.346625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.346688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 00:25:40.780 [2024-11-26 20:55:44.346937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.347006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 00:25:40.780 [2024-11-26 20:55:44.347332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.347393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 00:25:40.780 [2024-11-26 20:55:44.347638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.347703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 00:25:40.780 [2024-11-26 20:55:44.347962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.348026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 00:25:40.780 [2024-11-26 20:55:44.348229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.348287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 00:25:40.780 [2024-11-26 20:55:44.348576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.348661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 00:25:40.780 [2024-11-26 20:55:44.348904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.348969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 00:25:40.780 [2024-11-26 20:55:44.349283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.349385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 
00:25:40.780 [2024-11-26 20:55:44.349581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.349661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 00:25:40.780 [2024-11-26 20:55:44.349946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.350028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 00:25:40.780 [2024-11-26 20:55:44.350337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.780 [2024-11-26 20:55:44.350419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.780 qpair failed and we were unable to recover it. 00:25:40.781 [2024-11-26 20:55:44.350721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.350785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 00:25:40.781 [2024-11-26 20:55:44.351073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.351138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 00:25:40.781 [2024-11-26 20:55:44.351437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.351514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 00:25:40.781 [2024-11-26 20:55:44.351793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.351857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 00:25:40.781 [2024-11-26 20:55:44.352080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.352144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 00:25:40.781 [2024-11-26 20:55:44.352445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.352517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 00:25:40.781 [2024-11-26 20:55:44.352781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.352863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 
00:25:40.781 [2024-11-26 20:55:44.353119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.353182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 00:25:40.781 [2024-11-26 20:55:44.353454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.353515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 00:25:40.781 [2024-11-26 20:55:44.353793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.353856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 00:25:40.781 [2024-11-26 20:55:44.354059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.354125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 00:25:40.781 [2024-11-26 20:55:44.354371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.354438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 00:25:40.781 [2024-11-26 20:55:44.354640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.354700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 00:25:40.781 [2024-11-26 20:55:44.354929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.354993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 00:25:40.781 [2024-11-26 20:55:44.355258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.355342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 00:25:40.781 [2024-11-26 20:55:44.355585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.355666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 00:25:40.781 [2024-11-26 20:55:44.355934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.356004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 
00:25:40.781 [2024-11-26 20:55:44.356337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.356435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 00:25:40.781 [2024-11-26 20:55:44.356733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.356798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 00:25:40.781 [2024-11-26 20:55:44.357069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.357140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 00:25:40.781 [2024-11-26 20:55:44.357464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.357532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 00:25:40.781 [2024-11-26 20:55:44.357766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.357830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 00:25:40.781 [2024-11-26 20:55:44.358118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.358181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 00:25:40.781 [2024-11-26 20:55:44.358412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.358478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 00:25:40.781 [2024-11-26 20:55:44.358774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.358842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 00:25:40.781 [2024-11-26 20:55:44.359138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.359202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 00:25:40.781 [2024-11-26 20:55:44.359483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.359549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 
00:25:40.781 [2024-11-26 20:55:44.359760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.359835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 00:25:40.781 [2024-11-26 20:55:44.360157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.360224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 00:25:40.781 [2024-11-26 20:55:44.360474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.360538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 00:25:40.781 [2024-11-26 20:55:44.360792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.360856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 00:25:40.781 [2024-11-26 20:55:44.361149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.361213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 00:25:40.781 [2024-11-26 20:55:44.361504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.361585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 00:25:40.781 [2024-11-26 20:55:44.361833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.361898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 00:25:40.781 [2024-11-26 20:55:44.362136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.362200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 00:25:40.781 [2024-11-26 20:55:44.362464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.362529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 00:25:40.781 [2024-11-26 20:55:44.362804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.362876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 
00:25:40.781 [2024-11-26 20:55:44.363128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.781 [2024-11-26 20:55:44.363194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.781 qpair failed and we were unable to recover it. 00:25:40.781 [2024-11-26 20:55:44.363474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.782 [2024-11-26 20:55:44.363540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.782 qpair failed and we were unable to recover it. 00:25:40.782 [2024-11-26 20:55:44.363764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.782 [2024-11-26 20:55:44.363828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.782 qpair failed and we were unable to recover it. 00:25:40.782 [2024-11-26 20:55:44.364016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.782 [2024-11-26 20:55:44.364080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.782 qpair failed and we were unable to recover it. 00:25:40.782 [2024-11-26 20:55:44.364341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.782 [2024-11-26 20:55:44.364410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.782 qpair failed and we were unable to recover it. 00:25:40.782 [2024-11-26 20:55:44.364640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.782 [2024-11-26 20:55:44.364705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.782 qpair failed and we were unable to recover it. 00:25:40.782 [2024-11-26 20:55:44.364986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.782 [2024-11-26 20:55:44.365050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.782 qpair failed and we were unable to recover it. 00:25:40.782 [2024-11-26 20:55:44.365241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.782 [2024-11-26 20:55:44.365321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.782 qpair failed and we were unable to recover it. 00:25:40.782 [2024-11-26 20:55:44.365550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.782 [2024-11-26 20:55:44.365628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.782 qpair failed and we were unable to recover it. 00:25:40.782 [2024-11-26 20:55:44.365954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.782 [2024-11-26 20:55:44.366020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.782 qpair failed and we were unable to recover it. 
00:25:40.782 [2024-11-26 20:55:44.366279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.782 [2024-11-26 20:55:44.366368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.782 qpair failed and we were unable to recover it. 00:25:40.782 [2024-11-26 20:55:44.366661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.782 [2024-11-26 20:55:44.366724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.782 qpair failed and we were unable to recover it. 00:25:40.782 [2024-11-26 20:55:44.367006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.782 [2024-11-26 20:55:44.367076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.782 qpair failed and we were unable to recover it. 00:25:40.782 [2024-11-26 20:55:44.367347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.782 [2024-11-26 20:55:44.367413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.782 qpair failed and we were unable to recover it. 00:25:40.782 [2024-11-26 20:55:44.367620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.782 [2024-11-26 20:55:44.367684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.782 qpair failed and we were unable to recover it. 00:25:40.782 [2024-11-26 20:55:44.367933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.782 [2024-11-26 20:55:44.367996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.782 qpair failed and we were unable to recover it. 00:25:40.782 [2024-11-26 20:55:44.368269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.782 [2024-11-26 20:55:44.368368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.782 qpair failed and we were unable to recover it. 00:25:40.782 [2024-11-26 20:55:44.368667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.782 [2024-11-26 20:55:44.368735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.782 qpair failed and we were unable to recover it. 00:25:40.782 [2024-11-26 20:55:44.368928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.782 [2024-11-26 20:55:44.368993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.782 qpair failed and we were unable to recover it. 00:25:40.782 [2024-11-26 20:55:44.369257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.782 [2024-11-26 20:55:44.369347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:40.782 qpair failed and we were unable to recover it. 
00:25:40.782 [2024-11-26 20:55:44.369627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:40.782 [2024-11-26 20:55:44.369690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420
00:25:40.782 qpair failed and we were unable to recover it.
00:25:41.067 (the same connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." sequence repeats for tqpair=0x1f0dfa0 through [2024-11-26 20:55:44.432267])
00:25:41.068 [2024-11-26 20:55:44.432489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.068 [2024-11-26 20:55:44.432543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420
00:25:41.068 qpair failed and we were unable to recover it.
00:25:41.068 (the same error sequence repeats for tqpair=0x7f27b0000b90 through [2024-11-26 20:55:44.435640])
00:25:41.068 [2024-11-26 20:55:44.435808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.068 [2024-11-26 20:55:44.435881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.068 qpair failed and we were unable to recover it. 00:25:41.068 [2024-11-26 20:55:44.436166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.068 [2024-11-26 20:55:44.436231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.068 qpair failed and we were unable to recover it. 00:25:41.068 [2024-11-26 20:55:44.436442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.068 [2024-11-26 20:55:44.436477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.068 qpair failed and we were unable to recover it. 00:25:41.068 [2024-11-26 20:55:44.436617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.068 [2024-11-26 20:55:44.436651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.068 qpair failed and we were unable to recover it. 00:25:41.068 [2024-11-26 20:55:44.436803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.068 [2024-11-26 20:55:44.436840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.068 qpair failed and we were unable to recover it. 00:25:41.068 [2024-11-26 20:55:44.437050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.068 [2024-11-26 20:55:44.437128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.068 qpair failed and we were unable to recover it. 00:25:41.068 [2024-11-26 20:55:44.437372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.068 [2024-11-26 20:55:44.437408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.068 qpair failed and we were unable to recover it. 00:25:41.068 [2024-11-26 20:55:44.437526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.068 [2024-11-26 20:55:44.437561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.068 qpair failed and we were unable to recover it. 00:25:41.068 [2024-11-26 20:55:44.437720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.068 [2024-11-26 20:55:44.437755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.068 qpair failed and we were unable to recover it. 00:25:41.068 [2024-11-26 20:55:44.437902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.068 [2024-11-26 20:55:44.437937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.068 qpair failed and we were unable to recover it. 
00:25:41.068 [2024-11-26 20:55:44.438076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.068 [2024-11-26 20:55:44.438112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.068 qpair failed and we were unable to recover it. 00:25:41.068 [2024-11-26 20:55:44.438282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.068 [2024-11-26 20:55:44.438327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.068 qpair failed and we were unable to recover it. 00:25:41.068 [2024-11-26 20:55:44.438481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.068 [2024-11-26 20:55:44.438517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.068 qpair failed and we were unable to recover it. 00:25:41.068 [2024-11-26 20:55:44.438656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.068 [2024-11-26 20:55:44.438692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.068 qpair failed and we were unable to recover it. 00:25:41.068 [2024-11-26 20:55:44.438854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.068 [2024-11-26 20:55:44.438924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.068 qpair failed and we were unable to recover it. 00:25:41.068 [2024-11-26 20:55:44.439264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.068 [2024-11-26 20:55:44.439299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.068 qpair failed and we were unable to recover it. 00:25:41.068 [2024-11-26 20:55:44.439450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.068 [2024-11-26 20:55:44.439485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.068 qpair failed and we were unable to recover it. 00:25:41.068 [2024-11-26 20:55:44.439624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.068 [2024-11-26 20:55:44.439660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.068 qpair failed and we were unable to recover it. 00:25:41.068 [2024-11-26 20:55:44.439808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.068 [2024-11-26 20:55:44.439842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 00:25:41.069 [2024-11-26 20:55:44.440034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.440109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 
00:25:41.069 [2024-11-26 20:55:44.440368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.440404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 00:25:41.069 [2024-11-26 20:55:44.440570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.440606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 00:25:41.069 [2024-11-26 20:55:44.440779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.440867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 00:25:41.069 [2024-11-26 20:55:44.441096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.441163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 00:25:41.069 [2024-11-26 20:55:44.441395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.441431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 00:25:41.069 [2024-11-26 20:55:44.441544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.441587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 00:25:41.069 [2024-11-26 20:55:44.441737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.441771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 00:25:41.069 [2024-11-26 20:55:44.441899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.441935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 00:25:41.069 [2024-11-26 20:55:44.442209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.442277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 00:25:41.069 [2024-11-26 20:55:44.442506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.442541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 
00:25:41.069 [2024-11-26 20:55:44.442682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.442763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 00:25:41.069 [2024-11-26 20:55:44.443066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.443133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 00:25:41.069 [2024-11-26 20:55:44.443347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.443383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 00:25:41.069 [2024-11-26 20:55:44.443528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.443563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 00:25:41.069 [2024-11-26 20:55:44.443781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.443817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 00:25:41.069 [2024-11-26 20:55:44.443930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.443964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 00:25:41.069 [2024-11-26 20:55:44.444177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.444243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 00:25:41.069 [2024-11-26 20:55:44.444435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.444471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 00:25:41.069 [2024-11-26 20:55:44.444646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.444680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 00:25:41.069 [2024-11-26 20:55:44.444863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.444937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 
00:25:41.069 [2024-11-26 20:55:44.445176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.445242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 00:25:41.069 [2024-11-26 20:55:44.445498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.445534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 00:25:41.069 [2024-11-26 20:55:44.445685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.445758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 00:25:41.069 [2024-11-26 20:55:44.446049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.446106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 00:25:41.069 [2024-11-26 20:55:44.446285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.446373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 00:25:41.069 [2024-11-26 20:55:44.446554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.446613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 00:25:41.069 [2024-11-26 20:55:44.446826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.446878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 00:25:41.069 [2024-11-26 20:55:44.447050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.447102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 00:25:41.069 [2024-11-26 20:55:44.447273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.447366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 00:25:41.069 [2024-11-26 20:55:44.447518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.447554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 
00:25:41.069 [2024-11-26 20:55:44.447675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.447711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 00:25:41.069 [2024-11-26 20:55:44.447851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.447886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 00:25:41.069 [2024-11-26 20:55:44.448094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.448175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 00:25:41.069 [2024-11-26 20:55:44.448440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.448475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 00:25:41.069 [2024-11-26 20:55:44.448615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.448650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.069 qpair failed and we were unable to recover it. 00:25:41.069 [2024-11-26 20:55:44.448817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.069 [2024-11-26 20:55:44.448851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 00:25:41.070 [2024-11-26 20:55:44.448956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.449014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 00:25:41.070 [2024-11-26 20:55:44.449257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.449379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 00:25:41.070 [2024-11-26 20:55:44.449506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.449539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 00:25:41.070 [2024-11-26 20:55:44.449686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.449721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 
00:25:41.070 [2024-11-26 20:55:44.449863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.449897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 00:25:41.070 [2024-11-26 20:55:44.450028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.450068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 00:25:41.070 [2024-11-26 20:55:44.450280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.450377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 00:25:41.070 [2024-11-26 20:55:44.450497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.450532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 00:25:41.070 [2024-11-26 20:55:44.450660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.450697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 00:25:41.070 [2024-11-26 20:55:44.450832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.450867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 00:25:41.070 [2024-11-26 20:55:44.450972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.451007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 00:25:41.070 [2024-11-26 20:55:44.451151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.451186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 00:25:41.070 [2024-11-26 20:55:44.451297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.451338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 00:25:41.070 [2024-11-26 20:55:44.451473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.451508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 
00:25:41.070 [2024-11-26 20:55:44.451620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.451661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 00:25:41.070 [2024-11-26 20:55:44.451768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.451802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 00:25:41.070 [2024-11-26 20:55:44.451951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.451987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 00:25:41.070 [2024-11-26 20:55:44.452217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.452280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 00:25:41.070 [2024-11-26 20:55:44.452486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.452520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 00:25:41.070 [2024-11-26 20:55:44.452638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.452694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 00:25:41.070 [2024-11-26 20:55:44.452903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.452973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 00:25:41.070 [2024-11-26 20:55:44.453250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.453285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 00:25:41.070 [2024-11-26 20:55:44.453416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.453451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 00:25:41.070 [2024-11-26 20:55:44.453556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.453593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 
00:25:41.070 [2024-11-26 20:55:44.453711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.453746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 00:25:41.070 [2024-11-26 20:55:44.453888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.453953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 00:25:41.070 [2024-11-26 20:55:44.454253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.454361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 00:25:41.070 [2024-11-26 20:55:44.454509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.454545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 00:25:41.070 [2024-11-26 20:55:44.454724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.454759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 00:25:41.070 [2024-11-26 20:55:44.454888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.454923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 00:25:41.070 [2024-11-26 20:55:44.455020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.455054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 00:25:41.070 [2024-11-26 20:55:44.455273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.455318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 00:25:41.070 [2024-11-26 20:55:44.455453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.455487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 00:25:41.070 [2024-11-26 20:55:44.455625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.455678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 
00:25:41.070 [2024-11-26 20:55:44.455866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.455909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 00:25:41.070 [2024-11-26 20:55:44.456115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.456186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.070 qpair failed and we were unable to recover it. 00:25:41.070 [2024-11-26 20:55:44.456459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.070 [2024-11-26 20:55:44.456496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 00:25:41.071 [2024-11-26 20:55:44.456648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.456683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 00:25:41.071 [2024-11-26 20:55:44.456839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.456873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 00:25:41.071 [2024-11-26 20:55:44.457060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.457098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 00:25:41.071 [2024-11-26 20:55:44.457260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.457292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 00:25:41.071 [2024-11-26 20:55:44.457429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.457462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 00:25:41.071 [2024-11-26 20:55:44.457627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.457664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 00:25:41.071 [2024-11-26 20:55:44.457852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.457889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 
00:25:41.071 [2024-11-26 20:55:44.458038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.458077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 00:25:41.071 [2024-11-26 20:55:44.458230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.458270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 00:25:41.071 [2024-11-26 20:55:44.458457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.458499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 00:25:41.071 [2024-11-26 20:55:44.458677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.458731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 00:25:41.071 [2024-11-26 20:55:44.458861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.458901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 00:25:41.071 [2024-11-26 20:55:44.459056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.459094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 00:25:41.071 [2024-11-26 20:55:44.459253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.459292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 00:25:41.071 [2024-11-26 20:55:44.459437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.459476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 00:25:41.071 [2024-11-26 20:55:44.459666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.459700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 00:25:41.071 [2024-11-26 20:55:44.459839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.459874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 
00:25:41.071 [2024-11-26 20:55:44.460009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.460063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 00:25:41.071 [2024-11-26 20:55:44.460205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.460241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 00:25:41.071 [2024-11-26 20:55:44.460406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.460446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 00:25:41.071 [2024-11-26 20:55:44.460600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.460638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 00:25:41.071 [2024-11-26 20:55:44.460800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.460840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 00:25:41.071 [2024-11-26 20:55:44.460972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.461011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 00:25:41.071 [2024-11-26 20:55:44.461143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.461195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 00:25:41.071 [2024-11-26 20:55:44.461314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.461350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 00:25:41.071 [2024-11-26 20:55:44.461505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.461544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 00:25:41.071 [2024-11-26 20:55:44.461696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.461735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 
00:25:41.071 [2024-11-26 20:55:44.461904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.461940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 00:25:41.071 [2024-11-26 20:55:44.462090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.462127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 00:25:41.071 [2024-11-26 20:55:44.462247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.462284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 00:25:41.071 [2024-11-26 20:55:44.462443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.462480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 00:25:41.071 [2024-11-26 20:55:44.462605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.462642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 00:25:41.071 [2024-11-26 20:55:44.462776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.462842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 00:25:41.071 [2024-11-26 20:55:44.462966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.463005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 00:25:41.071 [2024-11-26 20:55:44.463184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.463218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 00:25:41.071 [2024-11-26 20:55:44.463327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.463362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 00:25:41.071 [2024-11-26 20:55:44.463548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.071 [2024-11-26 20:55:44.463605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.071 qpair failed and we were unable to recover it. 
00:25:41.071 [2024-11-26 20:55:44.463815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-11-26 20:55:44.463856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.072 qpair failed and we were unable to recover it. 00:25:41.072 [2024-11-26 20:55:44.464038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-11-26 20:55:44.464106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.072 qpair failed and we were unable to recover it. 00:25:41.072 [2024-11-26 20:55:44.464333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-11-26 20:55:44.464370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.072 qpair failed and we were unable to recover it. 00:25:41.072 [2024-11-26 20:55:44.464484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-11-26 20:55:44.464517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.072 qpair failed and we were unable to recover it. 00:25:41.072 [2024-11-26 20:55:44.464651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-11-26 20:55:44.464692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.072 qpair failed and we were unable to recover it. 00:25:41.072 [2024-11-26 20:55:44.464816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-11-26 20:55:44.464856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.072 qpair failed and we were unable to recover it. 00:25:41.072 [2024-11-26 20:55:44.465021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-11-26 20:55:44.465072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.072 qpair failed and we were unable to recover it. 00:25:41.072 [2024-11-26 20:55:44.465241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-11-26 20:55:44.465274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.072 qpair failed and we were unable to recover it. 00:25:41.072 [2024-11-26 20:55:44.465475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-11-26 20:55:44.465513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.072 qpair failed and we were unable to recover it. 00:25:41.072 [2024-11-26 20:55:44.465670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-11-26 20:55:44.465722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.072 qpair failed and we were unable to recover it. 
00:25:41.072 [2024-11-26 20:55:44.465895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-11-26 20:55:44.465936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.072 qpair failed and we were unable to recover it. 00:25:41.072 [2024-11-26 20:55:44.466124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-11-26 20:55:44.466184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.072 qpair failed and we were unable to recover it. 00:25:41.072 [2024-11-26 20:55:44.466434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-11-26 20:55:44.466469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.072 qpair failed and we were unable to recover it. 00:25:41.072 [2024-11-26 20:55:44.466586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-11-26 20:55:44.466622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.072 qpair failed and we were unable to recover it. 00:25:41.072 [2024-11-26 20:55:44.466900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-11-26 20:55:44.466960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.072 qpair failed and we were unable to recover it. 00:25:41.072 [2024-11-26 20:55:44.467232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-11-26 20:55:44.467290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.072 qpair failed and we were unable to recover it. 00:25:41.072 [2024-11-26 20:55:44.467573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-11-26 20:55:44.467611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.072 qpair failed and we were unable to recover it. 00:25:41.072 [2024-11-26 20:55:44.467769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-11-26 20:55:44.467817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.072 qpair failed and we were unable to recover it. 00:25:41.072 [2024-11-26 20:55:44.467960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-11-26 20:55:44.467999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.072 qpair failed and we were unable to recover it. 00:25:41.072 [2024-11-26 20:55:44.468171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-11-26 20:55:44.468206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.072 qpair failed and we were unable to recover it. 
00:25:41.072 [2024-11-26 20:55:44.468345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-11-26 20:55:44.468379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.072 qpair failed and we were unable to recover it. 00:25:41.072 [2024-11-26 20:55:44.468501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-11-26 20:55:44.468541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.072 qpair failed and we were unable to recover it. 00:25:41.072 [2024-11-26 20:55:44.468673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-11-26 20:55:44.468714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.072 qpair failed and we were unable to recover it. 00:25:41.072 [2024-11-26 20:55:44.468923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-11-26 20:55:44.468957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.072 qpair failed and we were unable to recover it. 00:25:41.072 [2024-11-26 20:55:44.469063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-11-26 20:55:44.469096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.072 qpair failed and we were unable to recover it. 00:25:41.072 [2024-11-26 20:55:44.469272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-11-26 20:55:44.469322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.072 qpair failed and we were unable to recover it. 00:25:41.072 [2024-11-26 20:55:44.469466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-11-26 20:55:44.469519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.072 qpair failed and we were unable to recover it. 00:25:41.072 [2024-11-26 20:55:44.469720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-11-26 20:55:44.469754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.072 qpair failed and we were unable to recover it. 00:25:41.072 [2024-11-26 20:55:44.469892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-11-26 20:55:44.469928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.072 qpair failed and we were unable to recover it. 00:25:41.072 [2024-11-26 20:55:44.470107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-11-26 20:55:44.470145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.072 qpair failed and we were unable to recover it. 
00:25:41.072 [2024-11-26 20:55:44.470291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-11-26 20:55:44.470345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.072 qpair failed and we were unable to recover it. 00:25:41.072 [2024-11-26 20:55:44.470570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.072 [2024-11-26 20:55:44.470630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.072 qpair failed and we were unable to recover it. 00:25:41.072 [2024-11-26 20:55:44.470862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.470924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 00:25:41.073 [2024-11-26 20:55:44.471214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.471286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 00:25:41.073 [2024-11-26 20:55:44.471569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.471604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 00:25:41.073 [2024-11-26 20:55:44.471775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.471826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 00:25:41.073 [2024-11-26 20:55:44.471982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.472050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 00:25:41.073 [2024-11-26 20:55:44.472325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.472368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 00:25:41.073 [2024-11-26 20:55:44.472521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.472562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 00:25:41.073 [2024-11-26 20:55:44.472690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.472730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 
00:25:41.073 [2024-11-26 20:55:44.472901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.472959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 00:25:41.073 [2024-11-26 20:55:44.473160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.473251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 00:25:41.073 [2024-11-26 20:55:44.473529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.473584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 00:25:41.073 [2024-11-26 20:55:44.473769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.473822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 00:25:41.073 [2024-11-26 20:55:44.474129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.474182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 00:25:41.073 [2024-11-26 20:55:44.474393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.474440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 00:25:41.073 [2024-11-26 20:55:44.474581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.474621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 00:25:41.073 [2024-11-26 20:55:44.474758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.474798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 00:25:41.073 [2024-11-26 20:55:44.474918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.474958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 00:25:41.073 [2024-11-26 20:55:44.475125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.475161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 
00:25:41.073 [2024-11-26 20:55:44.475319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.475354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 00:25:41.073 [2024-11-26 20:55:44.475518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.475559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 00:25:41.073 [2024-11-26 20:55:44.475737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.475770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 00:25:41.073 [2024-11-26 20:55:44.475916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.475961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 00:25:41.073 [2024-11-26 20:55:44.476089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.476124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 00:25:41.073 [2024-11-26 20:55:44.476234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.476268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 00:25:41.073 [2024-11-26 20:55:44.476511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.476552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 00:25:41.073 [2024-11-26 20:55:44.476717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.476761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 00:25:41.073 [2024-11-26 20:55:44.476936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.476999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 00:25:41.073 [2024-11-26 20:55:44.477198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.477251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 
00:25:41.073 [2024-11-26 20:55:44.477467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.477525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 00:25:41.073 [2024-11-26 20:55:44.477740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.477798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 00:25:41.073 [2024-11-26 20:55:44.477960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.477995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 00:25:41.073 [2024-11-26 20:55:44.478211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.478266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 00:25:41.073 [2024-11-26 20:55:44.478469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.478524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 00:25:41.073 [2024-11-26 20:55:44.478756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.478798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 00:25:41.073 [2024-11-26 20:55:44.478943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.478977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 00:25:41.073 [2024-11-26 20:55:44.479168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.479201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 00:25:41.073 [2024-11-26 20:55:44.479422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.479466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 00:25:41.073 [2024-11-26 20:55:44.479648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.479696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 
00:25:41.073 [2024-11-26 20:55:44.479893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.073 [2024-11-26 20:55:44.479926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.073 qpair failed and we were unable to recover it. 00:25:41.074 [2024-11-26 20:55:44.480044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.480077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 00:25:41.074 [2024-11-26 20:55:44.480208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.480250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 00:25:41.074 [2024-11-26 20:55:44.480534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.480574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 00:25:41.074 [2024-11-26 20:55:44.480727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.480760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 00:25:41.074 [2024-11-26 20:55:44.480879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.480912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 00:25:41.074 [2024-11-26 20:55:44.481067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.481109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 00:25:41.074 [2024-11-26 20:55:44.481283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.481346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 00:25:41.074 [2024-11-26 20:55:44.481465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.481507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 00:25:41.074 [2024-11-26 20:55:44.481652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.481685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 
00:25:41.074 [2024-11-26 20:55:44.481910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.481974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 00:25:41.074 [2024-11-26 20:55:44.482210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.482265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 00:25:41.074 [2024-11-26 20:55:44.482468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.482520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 00:25:41.074 [2024-11-26 20:55:44.482732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.482765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 00:25:41.074 [2024-11-26 20:55:44.482864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.482897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 00:25:41.074 [2024-11-26 20:55:44.483066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.483100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 00:25:41.074 [2024-11-26 20:55:44.483328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.483376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 00:25:41.074 [2024-11-26 20:55:44.483559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.483605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 00:25:41.074 [2024-11-26 20:55:44.483778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.483823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 00:25:41.074 [2024-11-26 20:55:44.484050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.484083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 
00:25:41.074 [2024-11-26 20:55:44.484239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.484273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 00:25:41.074 [2024-11-26 20:55:44.484427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.484461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 00:25:41.074 [2024-11-26 20:55:44.484604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.484638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 00:25:41.074 [2024-11-26 20:55:44.484810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.484855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 00:25:41.074 [2024-11-26 20:55:44.485067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.485135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 00:25:41.074 [2024-11-26 20:55:44.485471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.485534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 00:25:41.074 [2024-11-26 20:55:44.485755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.485814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 00:25:41.074 [2024-11-26 20:55:44.486028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.486063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 00:25:41.074 [2024-11-26 20:55:44.486209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.486246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 00:25:41.074 [2024-11-26 20:55:44.486471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.486529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 
00:25:41.074 [2024-11-26 20:55:44.486776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.486834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 00:25:41.074 [2024-11-26 20:55:44.487023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.487091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 00:25:41.074 [2024-11-26 20:55:44.487281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.487338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 00:25:41.074 [2024-11-26 20:55:44.487498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.487547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 00:25:41.074 [2024-11-26 20:55:44.487701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.487750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 00:25:41.074 [2024-11-26 20:55:44.487951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.487999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 00:25:41.074 [2024-11-26 20:55:44.488249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.488296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 00:25:41.074 [2024-11-26 20:55:44.488459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.488518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 00:25:41.074 [2024-11-26 20:55:44.488704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.074 [2024-11-26 20:55:44.488751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.074 qpair failed and we were unable to recover it. 00:25:41.074 [2024-11-26 20:55:44.488934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.488980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 
00:25:41.075 [2024-11-26 20:55:44.489191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.489239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 00:25:41.075 [2024-11-26 20:55:44.489413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.489461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 00:25:41.075 [2024-11-26 20:55:44.489636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.489705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 00:25:41.075 [2024-11-26 20:55:44.489929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.489977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 00:25:41.075 [2024-11-26 20:55:44.490182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.490217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 00:25:41.075 [2024-11-26 20:55:44.490335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.490371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 00:25:41.075 [2024-11-26 20:55:44.490512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.490547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 00:25:41.075 [2024-11-26 20:55:44.490700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.490773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 00:25:41.075 [2024-11-26 20:55:44.491012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.491058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 00:25:41.075 [2024-11-26 20:55:44.491268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.491323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 
00:25:41.075 [2024-11-26 20:55:44.491540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.491587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 00:25:41.075 [2024-11-26 20:55:44.491754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.491801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 00:25:41.075 [2024-11-26 20:55:44.492003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.492049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 00:25:41.075 [2024-11-26 20:55:44.492264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.492319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 00:25:41.075 [2024-11-26 20:55:44.492480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.492527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 00:25:41.075 [2024-11-26 20:55:44.492681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.492728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 00:25:41.075 [2024-11-26 20:55:44.492899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.492945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 00:25:41.075 [2024-11-26 20:55:44.493130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.493177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 00:25:41.075 [2024-11-26 20:55:44.493341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.493389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 00:25:41.075 [2024-11-26 20:55:44.493537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.493583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 
00:25:41.075 [2024-11-26 20:55:44.493725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.493789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 00:25:41.075 [2024-11-26 20:55:44.493929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.493964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 00:25:41.075 [2024-11-26 20:55:44.494117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.494162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 00:25:41.075 [2024-11-26 20:55:44.494348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.494396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 00:25:41.075 [2024-11-26 20:55:44.494648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.494701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 00:25:41.075 [2024-11-26 20:55:44.494826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.494889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 00:25:41.075 [2024-11-26 20:55:44.495049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.495097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 00:25:41.075 [2024-11-26 20:55:44.495238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.495283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 00:25:41.075 [2024-11-26 20:55:44.495444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.495490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 00:25:41.075 [2024-11-26 20:55:44.495669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.495714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 
00:25:41.075 [2024-11-26 20:55:44.495880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.495926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 00:25:41.075 [2024-11-26 20:55:44.496131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.496166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 00:25:41.075 [2024-11-26 20:55:44.496274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.496317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 00:25:41.075 [2024-11-26 20:55:44.496427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.496461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 00:25:41.075 [2024-11-26 20:55:44.496582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.496616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 00:25:41.075 [2024-11-26 20:55:44.496795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.496841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 00:25:41.075 [2024-11-26 20:55:44.497019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.075 [2024-11-26 20:55:44.497053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.075 qpair failed and we were unable to recover it. 00:25:41.075 [2024-11-26 20:55:44.497288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.076 [2024-11-26 20:55:44.497353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.076 qpair failed and we were unable to recover it. 00:25:41.076 [2024-11-26 20:55:44.497520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.076 [2024-11-26 20:55:44.497566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.076 qpair failed and we were unable to recover it. 00:25:41.076 [2024-11-26 20:55:44.497800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.076 [2024-11-26 20:55:44.497847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.076 qpair failed and we were unable to recover it. 
00:25:41.076 [2024-11-26 20:55:44.498060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.076 [2024-11-26 20:55:44.498106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.076 qpair failed and we were unable to recover it. 00:25:41.076 [2024-11-26 20:55:44.498318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.076 [2024-11-26 20:55:44.498366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.076 qpair failed and we were unable to recover it. 00:25:41.076 [2024-11-26 20:55:44.498552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.076 [2024-11-26 20:55:44.498601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.076 qpair failed and we were unable to recover it. 00:25:41.076 [2024-11-26 20:55:44.498754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.076 [2024-11-26 20:55:44.498800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.076 qpair failed and we were unable to recover it. 00:25:41.076 [2024-11-26 20:55:44.498984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.076 [2024-11-26 20:55:44.499029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.076 qpair failed and we were unable to recover it. 00:25:41.076 [2024-11-26 20:55:44.499217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.076 [2024-11-26 20:55:44.499264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.076 qpair failed and we were unable to recover it. 00:25:41.076 [2024-11-26 20:55:44.499458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.076 [2024-11-26 20:55:44.499504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.076 qpair failed and we were unable to recover it. 00:25:41.076 [2024-11-26 20:55:44.499731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.076 [2024-11-26 20:55:44.499778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.076 qpair failed and we were unable to recover it. 00:25:41.076 [2024-11-26 20:55:44.499955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.076 [2024-11-26 20:55:44.500002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.076 qpair failed and we were unable to recover it. 00:25:41.076 [2024-11-26 20:55:44.500171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.076 [2024-11-26 20:55:44.500216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.076 qpair failed and we were unable to recover it. 
00:25:41.076 [2024-11-26 20:55:44.500411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.076 [2024-11-26 20:55:44.500458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.076 qpair failed and we were unable to recover it. 00:25:41.076 [2024-11-26 20:55:44.500631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.076 [2024-11-26 20:55:44.500676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.076 qpair failed and we were unable to recover it. 00:25:41.076 [2024-11-26 20:55:44.500888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.076 [2024-11-26 20:55:44.500922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.076 qpair failed and we were unable to recover it. 00:25:41.076 [2024-11-26 20:55:44.501037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.076 [2024-11-26 20:55:44.501070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.076 qpair failed and we were unable to recover it. 00:25:41.076 [2024-11-26 20:55:44.501173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.076 [2024-11-26 20:55:44.501208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.076 qpair failed and we were unable to recover it. 00:25:41.076 [2024-11-26 20:55:44.501381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.076 [2024-11-26 20:55:44.501427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.076 qpair failed and we were unable to recover it. 00:25:41.076 [2024-11-26 20:55:44.501646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.076 [2024-11-26 20:55:44.501692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.076 qpair failed and we were unable to recover it. 00:25:41.076 [2024-11-26 20:55:44.501859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.076 [2024-11-26 20:55:44.501895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.076 qpair failed and we were unable to recover it. 00:25:41.076 [2024-11-26 20:55:44.502055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.076 [2024-11-26 20:55:44.502108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.076 qpair failed and we were unable to recover it. 00:25:41.076 [2024-11-26 20:55:44.502298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.076 [2024-11-26 20:55:44.502356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.076 qpair failed and we were unable to recover it. 
00:25:41.076 [2024-11-26 20:55:44.502528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.076 [2024-11-26 20:55:44.502574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.076 qpair failed and we were unable to recover it. 00:25:41.076 [2024-11-26 20:55:44.502757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.076 [2024-11-26 20:55:44.502803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.076 qpair failed and we were unable to recover it. 00:25:41.076 [2024-11-26 20:55:44.502993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.076 [2024-11-26 20:55:44.503035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.076 qpair failed and we were unable to recover it. 00:25:41.076 [2024-11-26 20:55:44.503217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.076 [2024-11-26 20:55:44.503252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.076 qpair failed and we were unable to recover it. 00:25:41.076 [2024-11-26 20:55:44.503398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.076 [2024-11-26 20:55:44.503434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.076 qpair failed and we were unable to recover it. 00:25:41.076 [2024-11-26 20:55:44.503609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.076 [2024-11-26 20:55:44.503651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.076 qpair failed and we were unable to recover it. 00:25:41.076 [2024-11-26 20:55:44.503840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.076 [2024-11-26 20:55:44.503885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.076 qpair failed and we were unable to recover it. 00:25:41.076 [2024-11-26 20:55:44.504057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.076 [2024-11-26 20:55:44.504101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.076 qpair failed and we were unable to recover it. 00:25:41.076 [2024-11-26 20:55:44.504287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.076 [2024-11-26 20:55:44.504365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.076 qpair failed and we were unable to recover it. 00:25:41.076 [2024-11-26 20:55:44.504505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.076 [2024-11-26 20:55:44.504549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.076 qpair failed and we were unable to recover it. 
00:25:41.076 [2024-11-26 20:55:44.504699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.504743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 00:25:41.077 [2024-11-26 20:55:44.504920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.504964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 00:25:41.077 [2024-11-26 20:55:44.505117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.505181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 00:25:41.077 [2024-11-26 20:55:44.505463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.505507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 00:25:41.077 [2024-11-26 20:55:44.505652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.505697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 00:25:41.077 [2024-11-26 20:55:44.505871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.505914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 00:25:41.077 [2024-11-26 20:55:44.506078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.506122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 00:25:41.077 [2024-11-26 20:55:44.506248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.506299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 00:25:41.077 [2024-11-26 20:55:44.506528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.506563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 00:25:41.077 [2024-11-26 20:55:44.506672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.506706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 
00:25:41.077 [2024-11-26 20:55:44.506932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.506976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 00:25:41.077 [2024-11-26 20:55:44.507123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.507169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 00:25:41.077 [2024-11-26 20:55:44.507341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.507394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 00:25:41.077 [2024-11-26 20:55:44.507529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.507562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 00:25:41.077 [2024-11-26 20:55:44.507670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.507704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 00:25:41.077 [2024-11-26 20:55:44.507843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.507877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 00:25:41.077 [2024-11-26 20:55:44.508092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.508157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 00:25:41.077 [2024-11-26 20:55:44.508415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.508482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 00:25:41.077 [2024-11-26 20:55:44.508789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.508856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 00:25:41.077 [2024-11-26 20:55:44.509059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.509127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 
00:25:41.077 [2024-11-26 20:55:44.509418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.509484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 00:25:41.077 [2024-11-26 20:55:44.509733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.509769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 00:25:41.077 [2024-11-26 20:55:44.509942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.509993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 00:25:41.077 [2024-11-26 20:55:44.510148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.510218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 00:25:41.077 [2024-11-26 20:55:44.510431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.510486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 00:25:41.077 [2024-11-26 20:55:44.510754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.510806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 00:25:41.077 [2024-11-26 20:55:44.510953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.511000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 00:25:41.077 [2024-11-26 20:55:44.511162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.511231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 00:25:41.077 [2024-11-26 20:55:44.511442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.511503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 00:25:41.077 [2024-11-26 20:55:44.511723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.511783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 
00:25:41.077 [2024-11-26 20:55:44.512043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.512103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 00:25:41.077 [2024-11-26 20:55:44.512389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.512437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 00:25:41.077 [2024-11-26 20:55:44.512620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.512665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 00:25:41.077 [2024-11-26 20:55:44.512852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.512898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 00:25:41.077 [2024-11-26 20:55:44.513092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.513138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 00:25:41.077 [2024-11-26 20:55:44.513325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.513372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 00:25:41.077 [2024-11-26 20:55:44.513544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.513590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 00:25:41.077 [2024-11-26 20:55:44.513747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.077 [2024-11-26 20:55:44.513793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.077 qpair failed and we were unable to recover it. 00:25:41.077 [2024-11-26 20:55:44.513970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.514017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 00:25:41.078 [2024-11-26 20:55:44.514177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.514224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 
00:25:41.078 [2024-11-26 20:55:44.514388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.514434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 00:25:41.078 [2024-11-26 20:55:44.514649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.514695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 00:25:41.078 [2024-11-26 20:55:44.514885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.514932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 00:25:41.078 [2024-11-26 20:55:44.515142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.515188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 00:25:41.078 [2024-11-26 20:55:44.515355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.515402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 00:25:41.078 [2024-11-26 20:55:44.515547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.515593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 00:25:41.078 [2024-11-26 20:55:44.515807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.515854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 00:25:41.078 [2024-11-26 20:55:44.516006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.516063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 00:25:41.078 [2024-11-26 20:55:44.516217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.516263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 00:25:41.078 [2024-11-26 20:55:44.516465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.516534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 
00:25:41.078 [2024-11-26 20:55:44.516732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.516781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 00:25:41.078 [2024-11-26 20:55:44.516991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.517062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 00:25:41.078 [2024-11-26 20:55:44.517278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.517338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 00:25:41.078 [2024-11-26 20:55:44.517517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.517559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 00:25:41.078 [2024-11-26 20:55:44.517734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.517777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 00:25:41.078 [2024-11-26 20:55:44.517952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.517999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 00:25:41.078 [2024-11-26 20:55:44.518158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.518201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 00:25:41.078 [2024-11-26 20:55:44.518456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.518500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 00:25:41.078 [2024-11-26 20:55:44.518685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.518748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 00:25:41.078 [2024-11-26 20:55:44.518963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.519027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 
00:25:41.078 [2024-11-26 20:55:44.519262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.519360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 00:25:41.078 [2024-11-26 20:55:44.519568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.519635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 00:25:41.078 [2024-11-26 20:55:44.519907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.519975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 00:25:41.078 [2024-11-26 20:55:44.520250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.520334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 00:25:41.078 [2024-11-26 20:55:44.520533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.520579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 00:25:41.078 [2024-11-26 20:55:44.520756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.520803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 00:25:41.078 [2024-11-26 20:55:44.520964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.521010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 00:25:41.078 [2024-11-26 20:55:44.521155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.521202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 00:25:41.078 [2024-11-26 20:55:44.521367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.521413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 00:25:41.078 [2024-11-26 20:55:44.521571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.521619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 
00:25:41.078 [2024-11-26 20:55:44.521834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.521879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 00:25:41.078 [2024-11-26 20:55:44.522034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.522080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 00:25:41.078 [2024-11-26 20:55:44.522264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.522322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 00:25:41.078 [2024-11-26 20:55:44.522497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.522543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 00:25:41.078 [2024-11-26 20:55:44.522779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.522859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 00:25:41.078 [2024-11-26 20:55:44.523152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.523198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.078 qpair failed and we were unable to recover it. 00:25:41.078 [2024-11-26 20:55:44.523387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.078 [2024-11-26 20:55:44.523434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 00:25:41.079 [2024-11-26 20:55:44.523641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.523706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 00:25:41.079 [2024-11-26 20:55:44.523867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.523931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 00:25:41.079 [2024-11-26 20:55:44.524122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.524188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 
00:25:41.079 [2024-11-26 20:55:44.524498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.524565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 00:25:41.079 [2024-11-26 20:55:44.524802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.524851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 00:25:41.079 [2024-11-26 20:55:44.525016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.525066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 00:25:41.079 [2024-11-26 20:55:44.525261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.525330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 00:25:41.079 [2024-11-26 20:55:44.525515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.525564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 00:25:41.079 [2024-11-26 20:55:44.525754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.525803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 00:25:41.079 [2024-11-26 20:55:44.525969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.526018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 00:25:41.079 [2024-11-26 20:55:44.526209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.526258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 00:25:41.079 [2024-11-26 20:55:44.526482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.526529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 00:25:41.079 [2024-11-26 20:55:44.526716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.526771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 
00:25:41.079 [2024-11-26 20:55:44.526931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.526978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 00:25:41.079 [2024-11-26 20:55:44.527148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.527234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 00:25:41.079 [2024-11-26 20:55:44.527516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.527563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 00:25:41.079 [2024-11-26 20:55:44.527738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.527808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 00:25:41.079 [2024-11-26 20:55:44.528103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.528149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 00:25:41.079 [2024-11-26 20:55:44.528338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.528386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 00:25:41.079 [2024-11-26 20:55:44.528644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.528708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 00:25:41.079 [2024-11-26 20:55:44.528974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.529041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 00:25:41.079 [2024-11-26 20:55:44.529253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.529311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 00:25:41.079 [2024-11-26 20:55:44.529469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.529518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 
00:25:41.079 [2024-11-26 20:55:44.529724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.529773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 00:25:41.079 [2024-11-26 20:55:44.529986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.530037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 00:25:41.079 [2024-11-26 20:55:44.530267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.530333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 00:25:41.079 [2024-11-26 20:55:44.530536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.530586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 00:25:41.079 [2024-11-26 20:55:44.530812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.530860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 00:25:41.079 [2024-11-26 20:55:44.531050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.531099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 00:25:41.079 [2024-11-26 20:55:44.531292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.531351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 00:25:41.079 [2024-11-26 20:55:44.531575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.531624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 00:25:41.079 [2024-11-26 20:55:44.531811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.531860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 00:25:41.079 [2024-11-26 20:55:44.532078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.532127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 
00:25:41.079 [2024-11-26 20:55:44.532315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.532365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 00:25:41.079 [2024-11-26 20:55:44.532550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.532600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 00:25:41.079 [2024-11-26 20:55:44.532816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.532864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 00:25:41.079 [2024-11-26 20:55:44.533030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.079 [2024-11-26 20:55:44.533079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.079 qpair failed and we were unable to recover it. 00:25:41.079 [2024-11-26 20:55:44.533268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.533391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.080 qpair failed and we were unable to recover it. 00:25:41.080 [2024-11-26 20:55:44.533569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.533618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.080 qpair failed and we were unable to recover it. 00:25:41.080 [2024-11-26 20:55:44.533837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.533902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.080 qpair failed and we were unable to recover it. 00:25:41.080 [2024-11-26 20:55:44.534146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.534211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.080 qpair failed and we were unable to recover it. 00:25:41.080 [2024-11-26 20:55:44.534491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.534541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.080 qpair failed and we were unable to recover it. 00:25:41.080 [2024-11-26 20:55:44.534760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.534825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.080 qpair failed and we were unable to recover it. 
00:25:41.080 [2024-11-26 20:55:44.535017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.535082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.080 qpair failed and we were unable to recover it. 00:25:41.080 [2024-11-26 20:55:44.535372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.535424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.080 qpair failed and we were unable to recover it. 00:25:41.080 [2024-11-26 20:55:44.535591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.535640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.080 qpair failed and we were unable to recover it. 00:25:41.080 [2024-11-26 20:55:44.535827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.535875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.080 qpair failed and we were unable to recover it. 00:25:41.080 [2024-11-26 20:55:44.536102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.536151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.080 qpair failed and we were unable to recover it. 00:25:41.080 [2024-11-26 20:55:44.536345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.536395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.080 qpair failed and we were unable to recover it. 00:25:41.080 [2024-11-26 20:55:44.536560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.536608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.080 qpair failed and we were unable to recover it. 00:25:41.080 [2024-11-26 20:55:44.536797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.536845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.080 qpair failed and we were unable to recover it. 00:25:41.080 [2024-11-26 20:55:44.537044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.537093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.080 qpair failed and we were unable to recover it. 00:25:41.080 [2024-11-26 20:55:44.537233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.537283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.080 qpair failed and we were unable to recover it. 
00:25:41.080 [2024-11-26 20:55:44.537475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.537523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.080 qpair failed and we were unable to recover it. 00:25:41.080 [2024-11-26 20:55:44.537689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.537738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.080 qpair failed and we were unable to recover it. 00:25:41.080 [2024-11-26 20:55:44.537895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.537946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.080 qpair failed and we were unable to recover it. 00:25:41.080 [2024-11-26 20:55:44.538145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.538193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.080 qpair failed and we were unable to recover it. 00:25:41.080 [2024-11-26 20:55:44.538388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.538438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.080 qpair failed and we were unable to recover it. 00:25:41.080 [2024-11-26 20:55:44.538626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.538675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.080 qpair failed and we were unable to recover it. 00:25:41.080 [2024-11-26 20:55:44.538859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.538908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.080 qpair failed and we were unable to recover it. 00:25:41.080 [2024-11-26 20:55:44.539099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.539149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.080 qpair failed and we were unable to recover it. 00:25:41.080 [2024-11-26 20:55:44.539343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.539395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.080 qpair failed and we were unable to recover it. 00:25:41.080 [2024-11-26 20:55:44.539536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.539585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.080 qpair failed and we were unable to recover it. 
00:25:41.080 [2024-11-26 20:55:44.539783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.539833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.080 qpair failed and we were unable to recover it. 00:25:41.080 [2024-11-26 20:55:44.540018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.540067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.080 qpair failed and we were unable to recover it. 00:25:41.080 [2024-11-26 20:55:44.540217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.540266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.080 qpair failed and we were unable to recover it. 00:25:41.080 [2024-11-26 20:55:44.540480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.540527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.080 qpair failed and we were unable to recover it. 00:25:41.080 [2024-11-26 20:55:44.540734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.540779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.080 qpair failed and we were unable to recover it. 00:25:41.080 [2024-11-26 20:55:44.540956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.541001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.080 qpair failed and we were unable to recover it. 00:25:41.080 [2024-11-26 20:55:44.541176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.541223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.080 qpair failed and we were unable to recover it. 00:25:41.080 [2024-11-26 20:55:44.541370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.541419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.080 qpair failed and we were unable to recover it. 00:25:41.080 [2024-11-26 20:55:44.541576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.541622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.080 qpair failed and we were unable to recover it. 00:25:41.080 [2024-11-26 20:55:44.541843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.541908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.080 qpair failed and we were unable to recover it. 
00:25:41.080 [2024-11-26 20:55:44.542203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.080 [2024-11-26 20:55:44.542267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.081 qpair failed and we were unable to recover it. 00:25:41.081 [2024-11-26 20:55:44.542490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.081 [2024-11-26 20:55:44.542555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.081 qpair failed and we were unable to recover it. 00:25:41.081 [2024-11-26 20:55:44.542820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.081 [2024-11-26 20:55:44.542868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.081 qpair failed and we were unable to recover it. 00:25:41.081 [2024-11-26 20:55:44.543060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.081 [2024-11-26 20:55:44.543133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.081 qpair failed and we were unable to recover it. 00:25:41.081 [2024-11-26 20:55:44.543389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.081 [2024-11-26 20:55:44.543467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.081 qpair failed and we were unable to recover it. 00:25:41.081 [2024-11-26 20:55:44.543740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.081 [2024-11-26 20:55:44.543786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.081 qpair failed and we were unable to recover it. 00:25:41.081 [2024-11-26 20:55:44.544000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.081 [2024-11-26 20:55:44.544048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.081 qpair failed and we were unable to recover it. 00:25:41.081 [2024-11-26 20:55:44.544230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.081 [2024-11-26 20:55:44.544279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.081 qpair failed and we were unable to recover it. 00:25:41.081 [2024-11-26 20:55:44.544431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.081 [2024-11-26 20:55:44.544480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.081 qpair failed and we were unable to recover it. 00:25:41.081 [2024-11-26 20:55:44.544669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.081 [2024-11-26 20:55:44.544718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.081 qpair failed and we were unable to recover it. 
00:25:41.081 [2024-11-26 20:55:44.544881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.081 [2024-11-26 20:55:44.544931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420
00:25:41.081 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats continuously with timestamps from 2024-11-26 20:55:44.545 through 20:55:44.590; duplicate repetitions condensed ...]
00:25:41.087 [2024-11-26 20:55:44.591002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.591054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 00:25:41.087 [2024-11-26 20:55:44.591236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.591293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 00:25:41.087 [2024-11-26 20:55:44.591512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.591565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 00:25:41.087 [2024-11-26 20:55:44.591776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.591828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 00:25:41.087 [2024-11-26 20:55:44.592049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.592099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 00:25:41.087 [2024-11-26 20:55:44.592269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.592333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 00:25:41.087 [2024-11-26 20:55:44.592486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.592537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 00:25:41.087 [2024-11-26 20:55:44.592744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.592796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 00:25:41.087 [2024-11-26 20:55:44.592996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.593049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 00:25:41.087 [2024-11-26 20:55:44.593261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.593326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 
00:25:41.087 [2024-11-26 20:55:44.593484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.593533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 00:25:41.087 [2024-11-26 20:55:44.593753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.593816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 00:25:41.087 [2024-11-26 20:55:44.593979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.594031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 00:25:41.087 [2024-11-26 20:55:44.594202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.594254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 00:25:41.087 [2024-11-26 20:55:44.594507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.594542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 00:25:41.087 [2024-11-26 20:55:44.594682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.594717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 00:25:41.087 [2024-11-26 20:55:44.594861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.594894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 00:25:41.087 [2024-11-26 20:55:44.595038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.595100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 00:25:41.087 [2024-11-26 20:55:44.595273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.595341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 00:25:41.087 [2024-11-26 20:55:44.595509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.595563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 
00:25:41.087 [2024-11-26 20:55:44.595765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.595818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 00:25:41.087 [2024-11-26 20:55:44.595996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.596046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 00:25:41.087 [2024-11-26 20:55:44.596193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.596241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 00:25:41.087 [2024-11-26 20:55:44.596400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.596448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 00:25:41.087 [2024-11-26 20:55:44.596596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.596643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 00:25:41.087 [2024-11-26 20:55:44.596849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.596898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 00:25:41.087 [2024-11-26 20:55:44.597111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.597166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 00:25:41.087 [2024-11-26 20:55:44.597348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.597406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 00:25:41.087 [2024-11-26 20:55:44.597581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.597637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 00:25:41.087 [2024-11-26 20:55:44.597805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.597860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 
00:25:41.087 [2024-11-26 20:55:44.598077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.598113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 00:25:41.087 [2024-11-26 20:55:44.598346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.598381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 00:25:41.087 [2024-11-26 20:55:44.598520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.598553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 00:25:41.087 [2024-11-26 20:55:44.598791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.598843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 00:25:41.087 [2024-11-26 20:55:44.599017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.599070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 00:25:41.087 [2024-11-26 20:55:44.599279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.599342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 00:25:41.087 [2024-11-26 20:55:44.599556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.087 [2024-11-26 20:55:44.599617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.087 qpair failed and we were unable to recover it. 00:25:41.087 [2024-11-26 20:55:44.599811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.599863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 00:25:41.088 [2024-11-26 20:55:44.600073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.600126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 00:25:41.088 [2024-11-26 20:55:44.600336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.600391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 
00:25:41.088 [2024-11-26 20:55:44.600586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.600660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 00:25:41.088 [2024-11-26 20:55:44.600841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.600891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 00:25:41.088 [2024-11-26 20:55:44.601100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.601133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 00:25:41.088 [2024-11-26 20:55:44.601243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.601276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 00:25:41.088 [2024-11-26 20:55:44.601394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.601427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 00:25:41.088 [2024-11-26 20:55:44.601642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.601676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 00:25:41.088 [2024-11-26 20:55:44.601849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.601908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 00:25:41.088 [2024-11-26 20:55:44.602137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.602187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 00:25:41.088 [2024-11-26 20:55:44.602382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.602452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 00:25:41.088 [2024-11-26 20:55:44.602649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.602721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 
00:25:41.088 [2024-11-26 20:55:44.602919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.602969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 00:25:41.088 [2024-11-26 20:55:44.603206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.603266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 00:25:41.088 [2024-11-26 20:55:44.603514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.603584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 00:25:41.088 [2024-11-26 20:55:44.603777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.603848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 00:25:41.088 [2024-11-26 20:55:44.604069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.604121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 00:25:41.088 [2024-11-26 20:55:44.604331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.604385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 00:25:41.088 [2024-11-26 20:55:44.604602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.604678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 00:25:41.088 [2024-11-26 20:55:44.604909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.604979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 00:25:41.088 [2024-11-26 20:55:44.605192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.605243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 00:25:41.088 [2024-11-26 20:55:44.605476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.605512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 
00:25:41.088 [2024-11-26 20:55:44.605616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.605647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 00:25:41.088 [2024-11-26 20:55:44.605779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.605813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 00:25:41.088 [2024-11-26 20:55:44.605956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.605991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 00:25:41.088 [2024-11-26 20:55:44.606124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.606157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 00:25:41.088 [2024-11-26 20:55:44.606351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.606403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 00:25:41.088 [2024-11-26 20:55:44.606614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.606668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 00:25:41.088 [2024-11-26 20:55:44.606827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.606878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 00:25:41.088 [2024-11-26 20:55:44.607078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.607129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 00:25:41.088 [2024-11-26 20:55:44.607290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.607354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 00:25:41.088 [2024-11-26 20:55:44.607586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.607637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 
00:25:41.088 [2024-11-26 20:55:44.607841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.607894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 00:25:41.088 [2024-11-26 20:55:44.608097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.608149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 00:25:41.088 [2024-11-26 20:55:44.608318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.608372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 00:25:41.088 [2024-11-26 20:55:44.608560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.088 [2024-11-26 20:55:44.608633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.088 qpair failed and we were unable to recover it. 00:25:41.088 [2024-11-26 20:55:44.608836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.608888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 00:25:41.089 [2024-11-26 20:55:44.609096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.609153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 00:25:41.089 [2024-11-26 20:55:44.609265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.609297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 00:25:41.089 [2024-11-26 20:55:44.609457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.609491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 00:25:41.089 [2024-11-26 20:55:44.609627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.609663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 00:25:41.089 [2024-11-26 20:55:44.609863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.609917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 
00:25:41.089 [2024-11-26 20:55:44.610107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.610159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 00:25:41.089 [2024-11-26 20:55:44.610323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.610378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 00:25:41.089 [2024-11-26 20:55:44.610582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.610637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 00:25:41.089 [2024-11-26 20:55:44.610817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.610869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 00:25:41.089 [2024-11-26 20:55:44.611077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.611128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 00:25:41.089 [2024-11-26 20:55:44.611366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.611401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 00:25:41.089 [2024-11-26 20:55:44.611542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.611575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 00:25:41.089 [2024-11-26 20:55:44.611733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.611768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 00:25:41.089 [2024-11-26 20:55:44.611888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.611922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 00:25:41.089 [2024-11-26 20:55:44.612091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.612125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 
00:25:41.089 [2024-11-26 20:55:44.612270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.612313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 00:25:41.089 [2024-11-26 20:55:44.612531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.612601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 00:25:41.089 [2024-11-26 20:55:44.612800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.612851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 00:25:41.089 [2024-11-26 20:55:44.613057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.613106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 00:25:41.089 [2024-11-26 20:55:44.613270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.613334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 00:25:41.089 [2024-11-26 20:55:44.613509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.613562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 00:25:41.089 [2024-11-26 20:55:44.613717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.613768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 00:25:41.089 [2024-11-26 20:55:44.613983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.614020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 00:25:41.089 [2024-11-26 20:55:44.614176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.614212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 00:25:41.089 [2024-11-26 20:55:44.614425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.614476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 
00:25:41.089 [2024-11-26 20:55:44.614686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.614737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 00:25:41.089 [2024-11-26 20:55:44.614927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.614978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 00:25:41.089 [2024-11-26 20:55:44.615183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.615233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 00:25:41.089 [2024-11-26 20:55:44.615468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.615521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 00:25:41.089 [2024-11-26 20:55:44.615714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.615748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 00:25:41.089 [2024-11-26 20:55:44.615895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.615943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 00:25:41.089 [2024-11-26 20:55:44.616109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.616159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 00:25:41.089 [2024-11-26 20:55:44.616373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.616409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 00:25:41.089 [2024-11-26 20:55:44.616542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.616578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 00:25:41.089 [2024-11-26 20:55:44.616751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.616802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 
00:25:41.089 [2024-11-26 20:55:44.617046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.617097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 00:25:41.089 [2024-11-26 20:55:44.617317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.089 [2024-11-26 20:55:44.617371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.089 qpair failed and we were unable to recover it. 00:25:41.089 [2024-11-26 20:55:44.617581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.090 [2024-11-26 20:55:44.617633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.090 qpair failed and we were unable to recover it. 00:25:41.090 [2024-11-26 20:55:44.617825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.090 [2024-11-26 20:55:44.617876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.090 qpair failed and we were unable to recover it. 00:25:41.090 [2024-11-26 20:55:44.618087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.090 [2024-11-26 20:55:44.618139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.090 qpair failed and we were unable to recover it. 00:25:41.090 [2024-11-26 20:55:44.618330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.090 [2024-11-26 20:55:44.618383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.090 qpair failed and we were unable to recover it. 00:25:41.090 [2024-11-26 20:55:44.618586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.090 [2024-11-26 20:55:44.618665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.090 qpair failed and we were unable to recover it. 00:25:41.090 [2024-11-26 20:55:44.618931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.090 [2024-11-26 20:55:44.619002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.090 qpair failed and we were unable to recover it. 00:25:41.090 [2024-11-26 20:55:44.619185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.090 [2024-11-26 20:55:44.619239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.090 qpair failed and we were unable to recover it. 00:25:41.090 [2024-11-26 20:55:44.619500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.090 [2024-11-26 20:55:44.619581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.090 qpair failed and we were unable to recover it. 
00:25:41.090 [2024-11-26 20:55:44.619831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.090 [2024-11-26 20:55:44.619902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.090 qpair failed and we were unable to recover it. 00:25:41.090 [2024-11-26 20:55:44.620144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.090 [2024-11-26 20:55:44.620196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.090 qpair failed and we were unable to recover it. 00:25:41.090 [2024-11-26 20:55:44.620388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.090 [2024-11-26 20:55:44.620460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.090 qpair failed and we were unable to recover it. 00:25:41.090 [2024-11-26 20:55:44.620737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.090 [2024-11-26 20:55:44.620806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.090 qpair failed and we were unable to recover it. 00:25:41.090 [2024-11-26 20:55:44.621041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.090 [2024-11-26 20:55:44.621094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.090 qpair failed and we were unable to recover it. 00:25:41.090 [2024-11-26 20:55:44.621277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.090 [2024-11-26 20:55:44.621352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.090 qpair failed and we were unable to recover it. 00:25:41.090 [2024-11-26 20:55:44.621575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.090 [2024-11-26 20:55:44.621646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.090 qpair failed and we were unable to recover it. 00:25:41.090 [2024-11-26 20:55:44.621929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.090 [2024-11-26 20:55:44.622000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.090 qpair failed and we were unable to recover it. 00:25:41.090 [2024-11-26 20:55:44.622214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.090 [2024-11-26 20:55:44.622268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.090 qpair failed and we were unable to recover it. 00:25:41.090 [2024-11-26 20:55:44.622557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.090 [2024-11-26 20:55:44.622632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.090 qpair failed and we were unable to recover it. 
00:25:41.090 [2024-11-26 20:55:44.622889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.090 [2024-11-26 20:55:44.622955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.090 qpair failed and we were unable to recover it. 00:25:41.090 [2024-11-26 20:55:44.623095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.090 [2024-11-26 20:55:44.623142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.090 qpair failed and we were unable to recover it. 00:25:41.090 [2024-11-26 20:55:44.623310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.090 [2024-11-26 20:55:44.623359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.090 qpair failed and we were unable to recover it. 00:25:41.090 [2024-11-26 20:55:44.623473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.090 [2024-11-26 20:55:44.623512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.090 qpair failed and we were unable to recover it. 00:25:41.090 [2024-11-26 20:55:44.623632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.090 [2024-11-26 20:55:44.623669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.090 qpair failed and we were unable to recover it. 00:25:41.090 [2024-11-26 20:55:44.623798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.090 [2024-11-26 20:55:44.623844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.090 qpair failed and we were unable to recover it. 00:25:41.090 [2024-11-26 20:55:44.623987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.090 [2024-11-26 20:55:44.624026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.090 qpair failed and we were unable to recover it. 00:25:41.090 [2024-11-26 20:55:44.624172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.090 [2024-11-26 20:55:44.624216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.090 qpair failed and we were unable to recover it. 00:25:41.090 [2024-11-26 20:55:44.624360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.090 [2024-11-26 20:55:44.624396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.090 qpair failed and we were unable to recover it. 00:25:41.090 [2024-11-26 20:55:44.624545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.090 [2024-11-26 20:55:44.624579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.090 qpair failed and we were unable to recover it. 
00:25:41.090 [2024-11-26 20:55:44.624704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.090 [2024-11-26 20:55:44.624744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.090 qpair failed and we were unable to recover it. 00:25:41.090 [2024-11-26 20:55:44.624880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.090 [2024-11-26 20:55:44.624915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.090 qpair failed and we were unable to recover it. 00:25:41.090 [2024-11-26 20:55:44.625073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.090 [2024-11-26 20:55:44.625109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.090 qpair failed and we were unable to recover it. 00:25:41.090 [2024-11-26 20:55:44.625243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.090 [2024-11-26 20:55:44.625288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.090 qpair failed and we were unable to recover it. 00:25:41.090 [2024-11-26 20:55:44.625444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.090 [2024-11-26 20:55:44.625480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 00:25:41.091 [2024-11-26 20:55:44.625669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.625705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 00:25:41.091 [2024-11-26 20:55:44.625853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.625889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 00:25:41.091 [2024-11-26 20:55:44.626089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.626129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 00:25:41.091 [2024-11-26 20:55:44.626268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.626321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 00:25:41.091 [2024-11-26 20:55:44.626451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.626485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 
00:25:41.091 [2024-11-26 20:55:44.626608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.626643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 00:25:41.091 [2024-11-26 20:55:44.626749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.626788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 00:25:41.091 [2024-11-26 20:55:44.626904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.626945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 00:25:41.091 [2024-11-26 20:55:44.627089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.627127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 00:25:41.091 [2024-11-26 20:55:44.627281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.627323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 00:25:41.091 [2024-11-26 20:55:44.627453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.627488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 00:25:41.091 [2024-11-26 20:55:44.627626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.627660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 00:25:41.091 [2024-11-26 20:55:44.627793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.627837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 00:25:41.091 [2024-11-26 20:55:44.627994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.628030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 00:25:41.091 [2024-11-26 20:55:44.628173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.628218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 
00:25:41.091 [2024-11-26 20:55:44.628383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.628417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 00:25:41.091 [2024-11-26 20:55:44.628586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.628627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 00:25:41.091 [2024-11-26 20:55:44.628743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.628781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 00:25:41.091 [2024-11-26 20:55:44.628894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.628928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 00:25:41.091 [2024-11-26 20:55:44.629042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.629076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 00:25:41.091 [2024-11-26 20:55:44.629216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.629251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 00:25:41.091 [2024-11-26 20:55:44.629415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.629451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 00:25:41.091 [2024-11-26 20:55:44.629582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.629616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 00:25:41.091 [2024-11-26 20:55:44.629785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.629819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 00:25:41.091 [2024-11-26 20:55:44.629965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.630000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 
00:25:41.091 [2024-11-26 20:55:44.630123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.630157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 00:25:41.091 [2024-11-26 20:55:44.630322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.630364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 00:25:41.091 [2024-11-26 20:55:44.630483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.630519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 00:25:41.091 [2024-11-26 20:55:44.630696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.630760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 00:25:41.091 [2024-11-26 20:55:44.630929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.630984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 00:25:41.091 [2024-11-26 20:55:44.631134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.631185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 00:25:41.091 [2024-11-26 20:55:44.631447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.631482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 00:25:41.091 [2024-11-26 20:55:44.631592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.631628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 00:25:41.091 [2024-11-26 20:55:44.631791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.631848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 00:25:41.091 [2024-11-26 20:55:44.632013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.632047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 
00:25:41.091 [2024-11-26 20:55:44.632269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.632329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 00:25:41.091 [2024-11-26 20:55:44.632507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.632580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 00:25:41.091 [2024-11-26 20:55:44.632810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.091 [2024-11-26 20:55:44.632876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.091 qpair failed and we were unable to recover it. 00:25:41.092 [2024-11-26 20:55:44.633096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.633131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 00:25:41.092 [2024-11-26 20:55:44.633275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.633318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 00:25:41.092 [2024-11-26 20:55:44.633438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.633473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 00:25:41.092 [2024-11-26 20:55:44.633591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.633625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 00:25:41.092 [2024-11-26 20:55:44.633731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.633766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 00:25:41.092 [2024-11-26 20:55:44.633895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.633930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 00:25:41.092 [2024-11-26 20:55:44.634073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.634108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 
00:25:41.092 [2024-11-26 20:55:44.634276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.634327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 00:25:41.092 [2024-11-26 20:55:44.634468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.634503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 00:25:41.092 [2024-11-26 20:55:44.634613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.634647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 00:25:41.092 [2024-11-26 20:55:44.634755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.634792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 00:25:41.092 [2024-11-26 20:55:44.634937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.634971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 00:25:41.092 [2024-11-26 20:55:44.635102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.635150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 00:25:41.092 [2024-11-26 20:55:44.635364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.635416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 00:25:41.092 [2024-11-26 20:55:44.635575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.635625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 00:25:41.092 [2024-11-26 20:55:44.635839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.635875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 00:25:41.092 [2024-11-26 20:55:44.636020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.636057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 
00:25:41.092 [2024-11-26 20:55:44.636213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.636261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 00:25:41.092 [2024-11-26 20:55:44.636458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.636508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 00:25:41.092 [2024-11-26 20:55:44.636699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.636748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 00:25:41.092 [2024-11-26 20:55:44.636972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.637020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 00:25:41.092 [2024-11-26 20:55:44.637192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.637245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 00:25:41.092 [2024-11-26 20:55:44.637466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.637515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 00:25:41.092 [2024-11-26 20:55:44.637664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.637713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 00:25:41.092 [2024-11-26 20:55:44.637939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.637989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 00:25:41.092 [2024-11-26 20:55:44.638168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.638218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 00:25:41.092 [2024-11-26 20:55:44.638460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.638496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 
00:25:41.092 [2024-11-26 20:55:44.638668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.638728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 00:25:41.092 [2024-11-26 20:55:44.638899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.638957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 00:25:41.092 [2024-11-26 20:55:44.639190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.639240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 00:25:41.092 [2024-11-26 20:55:44.639405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.639455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 00:25:41.092 [2024-11-26 20:55:44.639679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.639730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 00:25:41.092 [2024-11-26 20:55:44.639875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.639908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 00:25:41.092 [2024-11-26 20:55:44.640024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.640060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 00:25:41.092 [2024-11-26 20:55:44.640293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.640356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 00:25:41.092 [2024-11-26 20:55:44.640557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.640592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 00:25:41.092 [2024-11-26 20:55:44.640692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.640727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 
00:25:41.092 [2024-11-26 20:55:44.640868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.092 [2024-11-26 20:55:44.640901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.092 qpair failed and we were unable to recover it. 00:25:41.092 [2024-11-26 20:55:44.641022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.641058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 00:25:41.093 [2024-11-26 20:55:44.641281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.641345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 00:25:41.093 [2024-11-26 20:55:44.641576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.641651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 00:25:41.093 [2024-11-26 20:55:44.641874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.641942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 00:25:41.093 [2024-11-26 20:55:44.642188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.642223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 00:25:41.093 [2024-11-26 20:55:44.642392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.642427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 00:25:41.093 [2024-11-26 20:55:44.642695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.642763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 00:25:41.093 [2024-11-26 20:55:44.642958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.643012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 00:25:41.093 [2024-11-26 20:55:44.643243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.643277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 
00:25:41.093 [2024-11-26 20:55:44.643422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.643457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 00:25:41.093 [2024-11-26 20:55:44.643703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.643740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 00:25:41.093 [2024-11-26 20:55:44.643884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.643920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 00:25:41.093 [2024-11-26 20:55:44.644089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.644124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 00:25:41.093 [2024-11-26 20:55:44.644271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.644314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 00:25:41.093 [2024-11-26 20:55:44.644455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.644490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 00:25:41.093 [2024-11-26 20:55:44.644719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.644769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 00:25:41.093 [2024-11-26 20:55:44.645021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.645056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 00:25:41.093 [2024-11-26 20:55:44.645194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.645229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 00:25:41.093 [2024-11-26 20:55:44.645404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.645440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 
00:25:41.093 [2024-11-26 20:55:44.645560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.645595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 00:25:41.093 [2024-11-26 20:55:44.645696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.645730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 00:25:41.093 [2024-11-26 20:55:44.645870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.645905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 00:25:41.093 [2024-11-26 20:55:44.646135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.646184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 00:25:41.093 [2024-11-26 20:55:44.646346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.646396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 00:25:41.093 [2024-11-26 20:55:44.646560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.646611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 00:25:41.093 [2024-11-26 20:55:44.646792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.646827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 00:25:41.093 [2024-11-26 20:55:44.646942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.646977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 00:25:41.093 [2024-11-26 20:55:44.647141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.647176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 00:25:41.093 [2024-11-26 20:55:44.647384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.647421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 
00:25:41.093 [2024-11-26 20:55:44.647567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.647604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 00:25:41.093 [2024-11-26 20:55:44.647763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.647804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 00:25:41.093 [2024-11-26 20:55:44.647923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.647958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 00:25:41.093 [2024-11-26 20:55:44.648119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.648168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 00:25:41.093 [2024-11-26 20:55:44.648364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.648416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 00:25:41.093 [2024-11-26 20:55:44.648655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.648690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 00:25:41.093 [2024-11-26 20:55:44.648841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.648875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 00:25:41.093 [2024-11-26 20:55:44.648997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.649031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 00:25:41.093 [2024-11-26 20:55:44.649243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.093 [2024-11-26 20:55:44.649293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.093 qpair failed and we were unable to recover it. 00:25:41.093 [2024-11-26 20:55:44.649525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.649559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 
00:25:41.094 [2024-11-26 20:55:44.649729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.649788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 00:25:41.094 [2024-11-26 20:55:44.649976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.650010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 00:25:41.094 [2024-11-26 20:55:44.650150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.650185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 00:25:41.094 [2024-11-26 20:55:44.650339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.650393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 00:25:41.094 [2024-11-26 20:55:44.650534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.650570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 00:25:41.094 [2024-11-26 20:55:44.650847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.650882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 00:25:41.094 [2024-11-26 20:55:44.651045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.651098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 00:25:41.094 [2024-11-26 20:55:44.651296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.651360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 00:25:41.094 [2024-11-26 20:55:44.651636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.651705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 00:25:41.094 [2024-11-26 20:55:44.651869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.651939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 
00:25:41.094 [2024-11-26 20:55:44.652237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.652286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 00:25:41.094 [2024-11-26 20:55:44.652536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.652605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 00:25:41.094 [2024-11-26 20:55:44.652859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.652926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 00:25:41.094 [2024-11-26 20:55:44.653152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.653201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 00:25:41.094 [2024-11-26 20:55:44.653384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.653453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 00:25:41.094 [2024-11-26 20:55:44.653744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.653816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 00:25:41.094 [2024-11-26 20:55:44.654072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.654141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 00:25:41.094 [2024-11-26 20:55:44.654332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.654383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 00:25:41.094 [2024-11-26 20:55:44.654623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.654674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 00:25:41.094 [2024-11-26 20:55:44.654855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.654921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 
00:25:41.094 [2024-11-26 20:55:44.655119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.655168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 00:25:41.094 [2024-11-26 20:55:44.655376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.655449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 00:25:41.094 [2024-11-26 20:55:44.655718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.655786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 00:25:41.094 [2024-11-26 20:55:44.655972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.656022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 00:25:41.094 [2024-11-26 20:55:44.656205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.656254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 00:25:41.094 [2024-11-26 20:55:44.656486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.656537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 00:25:41.094 [2024-11-26 20:55:44.656685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.656719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 00:25:41.094 [2024-11-26 20:55:44.656864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.656899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 00:25:41.094 [2024-11-26 20:55:44.657061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.657111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 00:25:41.094 [2024-11-26 20:55:44.657330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.657380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 
00:25:41.094 [2024-11-26 20:55:44.657569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.657639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 00:25:41.094 [2024-11-26 20:55:44.657878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.657918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 00:25:41.094 [2024-11-26 20:55:44.658062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.658096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 00:25:41.094 [2024-11-26 20:55:44.658366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.658434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 00:25:41.094 [2024-11-26 20:55:44.658649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.658719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 00:25:41.094 [2024-11-26 20:55:44.658917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.658955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 00:25:41.094 [2024-11-26 20:55:44.659105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.659142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.094 qpair failed and we were unable to recover it. 00:25:41.094 [2024-11-26 20:55:44.659348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.094 [2024-11-26 20:55:44.659399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.095 qpair failed and we were unable to recover it. 00:25:41.095 [2024-11-26 20:55:44.659604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.095 [2024-11-26 20:55:44.659653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.095 qpair failed and we were unable to recover it. 00:25:41.095 [2024-11-26 20:55:44.659837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.095 [2024-11-26 20:55:44.659886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:41.095 qpair failed and we were unable to recover it. 
00:25:41.095 [2024-11-26 20:55:44.660116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.095 [2024-11-26 20:55:44.660165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420
00:25:41.095 qpair failed and we were unable to recover it.
00:25:41.095 (the same connect() failed, errno = 111 / sock connection error pair for tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 repeats for every subsequent reconnect attempt from 20:55:44.660350 through 20:55:44.684929, each attempt ending with "qpair failed and we were unable to recover it.")
00:25:41.097 [2024-11-26 20:55:44.685133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.097 [2024-11-26 20:55:44.685211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420
00:25:41.097 qpair failed and we were unable to recover it.
00:25:41.100 (the same error pair repeats for tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 for every subsequent reconnect attempt from 20:55:44.685440 through 20:55:44.712395, each attempt ending with "qpair failed and we were unable to recover it.")
00:25:41.100 [2024-11-26 20:55:44.712536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.100 [2024-11-26 20:55:44.712571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.100 qpair failed and we were unable to recover it. 00:25:41.100 [2024-11-26 20:55:44.712736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.100 [2024-11-26 20:55:44.712770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.100 qpair failed and we were unable to recover it. 00:25:41.100 [2024-11-26 20:55:44.713014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.100 [2024-11-26 20:55:44.713078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.100 qpair failed and we were unable to recover it. 00:25:41.100 [2024-11-26 20:55:44.713367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.100 [2024-11-26 20:55:44.713433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.100 qpair failed and we were unable to recover it. 00:25:41.100 [2024-11-26 20:55:44.713708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.100 [2024-11-26 20:55:44.713772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.100 qpair failed and we were unable to recover it. 00:25:41.100 [2024-11-26 20:55:44.714021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.100 [2024-11-26 20:55:44.714086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.100 qpair failed and we were unable to recover it. 00:25:41.100 [2024-11-26 20:55:44.714350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.100 [2024-11-26 20:55:44.714387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.100 qpair failed and we were unable to recover it. 00:25:41.100 [2024-11-26 20:55:44.714530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.714564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 00:25:41.101 [2024-11-26 20:55:44.714728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.714768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 00:25:41.101 [2024-11-26 20:55:44.715015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.715081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 
00:25:41.101 [2024-11-26 20:55:44.715342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.715409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 00:25:41.101 [2024-11-26 20:55:44.715660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.715694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 00:25:41.101 [2024-11-26 20:55:44.715828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.715862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 00:25:41.101 [2024-11-26 20:55:44.716117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.716181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 00:25:41.101 [2024-11-26 20:55:44.716401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.716467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 00:25:41.101 [2024-11-26 20:55:44.716679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.716743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 00:25:41.101 [2024-11-26 20:55:44.716972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.717006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 00:25:41.101 [2024-11-26 20:55:44.717139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.717173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 00:25:41.101 [2024-11-26 20:55:44.717309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.717345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 00:25:41.101 [2024-11-26 20:55:44.717470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.717504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 
00:25:41.101 [2024-11-26 20:55:44.717751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.717816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 00:25:41.101 [2024-11-26 20:55:44.718028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.718095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 00:25:41.101 [2024-11-26 20:55:44.718404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.718470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 00:25:41.101 [2024-11-26 20:55:44.718758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.718824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 00:25:41.101 [2024-11-26 20:55:44.719067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.719131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 00:25:41.101 [2024-11-26 20:55:44.719424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.719460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 00:25:41.101 [2024-11-26 20:55:44.719608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.719643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 00:25:41.101 [2024-11-26 20:55:44.719746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.719781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 00:25:41.101 [2024-11-26 20:55:44.719920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.719954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 00:25:41.101 [2024-11-26 20:55:44.720193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.720258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 
00:25:41.101 [2024-11-26 20:55:44.720521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.720590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 00:25:41.101 [2024-11-26 20:55:44.720876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.720943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 00:25:41.101 [2024-11-26 20:55:44.721236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.721301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 00:25:41.101 [2024-11-26 20:55:44.721579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.721613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 00:25:41.101 [2024-11-26 20:55:44.721723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.721758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 00:25:41.101 [2024-11-26 20:55:44.721905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.721939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 00:25:41.101 [2024-11-26 20:55:44.722144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.722208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 00:25:41.101 [2024-11-26 20:55:44.722473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.722539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 00:25:41.101 [2024-11-26 20:55:44.722753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.722788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 00:25:41.101 [2024-11-26 20:55:44.722939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.722973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 
00:25:41.101 [2024-11-26 20:55:44.723110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.723146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 00:25:41.101 [2024-11-26 20:55:44.723290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.723335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 00:25:41.101 [2024-11-26 20:55:44.723539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.723607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 00:25:41.101 [2024-11-26 20:55:44.723899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.723964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 00:25:41.101 [2024-11-26 20:55:44.724208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.101 [2024-11-26 20:55:44.724275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.101 qpair failed and we were unable to recover it. 00:25:41.102 [2024-11-26 20:55:44.724543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.724608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 00:25:41.102 [2024-11-26 20:55:44.724837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.724871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 00:25:41.102 [2024-11-26 20:55:44.724974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.725007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 00:25:41.102 [2024-11-26 20:55:44.725160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.725220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 00:25:41.102 [2024-11-26 20:55:44.725355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.725390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 
00:25:41.102 [2024-11-26 20:55:44.725580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.725645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 00:25:41.102 [2024-11-26 20:55:44.725831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.725895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 00:25:41.102 [2024-11-26 20:55:44.726133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.726166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 00:25:41.102 [2024-11-26 20:55:44.726335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.726378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 00:25:41.102 [2024-11-26 20:55:44.726608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.726672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 00:25:41.102 [2024-11-26 20:55:44.726912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.726975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 00:25:41.102 [2024-11-26 20:55:44.727221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.727254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 00:25:41.102 [2024-11-26 20:55:44.727384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.727430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 00:25:41.102 [2024-11-26 20:55:44.727725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.727789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 00:25:41.102 [2024-11-26 20:55:44.728052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.728085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 
00:25:41.102 [2024-11-26 20:55:44.728258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.728347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 00:25:41.102 [2024-11-26 20:55:44.728615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.728680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 00:25:41.102 [2024-11-26 20:55:44.729040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.729104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 00:25:41.102 [2024-11-26 20:55:44.729394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.729429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 00:25:41.102 [2024-11-26 20:55:44.729596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.729630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 00:25:41.102 [2024-11-26 20:55:44.729911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.729974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 00:25:41.102 [2024-11-26 20:55:44.730215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.730283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 00:25:41.102 [2024-11-26 20:55:44.730569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.730604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 00:25:41.102 [2024-11-26 20:55:44.730718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.730751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 00:25:41.102 [2024-11-26 20:55:44.730902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.730935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 
00:25:41.102 [2024-11-26 20:55:44.731107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.731170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 00:25:41.102 [2024-11-26 20:55:44.731412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.731447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 00:25:41.102 [2024-11-26 20:55:44.731558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.731592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 00:25:41.102 [2024-11-26 20:55:44.731791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.731826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 00:25:41.102 [2024-11-26 20:55:44.731950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.731984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 00:25:41.102 [2024-11-26 20:55:44.732160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.732194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 00:25:41.102 [2024-11-26 20:55:44.732494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.732528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 00:25:41.102 [2024-11-26 20:55:44.732664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.732698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 00:25:41.102 [2024-11-26 20:55:44.732839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.732872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 00:25:41.102 [2024-11-26 20:55:44.732984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.733019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 
00:25:41.102 [2024-11-26 20:55:44.733157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.733191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 00:25:41.102 [2024-11-26 20:55:44.733396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.733430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 00:25:41.102 [2024-11-26 20:55:44.733530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.102 [2024-11-26 20:55:44.733563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.102 qpair failed and we were unable to recover it. 00:25:41.102 [2024-11-26 20:55:44.733769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.103 [2024-11-26 20:55:44.733833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.103 qpair failed and we were unable to recover it. 00:25:41.103 [2024-11-26 20:55:44.734066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.103 [2024-11-26 20:55:44.734130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.103 qpair failed and we were unable to recover it. 00:25:41.103 [2024-11-26 20:55:44.734377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.103 [2024-11-26 20:55:44.734443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.103 qpair failed and we were unable to recover it. 00:25:41.103 [2024-11-26 20:55:44.734745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.103 [2024-11-26 20:55:44.734809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.103 qpair failed and we were unable to recover it. 00:25:41.103 [2024-11-26 20:55:44.735091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.103 [2024-11-26 20:55:44.735155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.103 qpair failed and we were unable to recover it. 00:25:41.103 [2024-11-26 20:55:44.735428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.103 [2024-11-26 20:55:44.735504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.103 qpair failed and we were unable to recover it. 00:25:41.103 [2024-11-26 20:55:44.735732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.103 [2024-11-26 20:55:44.735766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.103 qpair failed and we were unable to recover it. 
00:25:41.103 [2024-11-26 20:55:44.735913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.103 [2024-11-26 20:55:44.735948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.103 qpair failed and we were unable to recover it. 00:25:41.103 [2024-11-26 20:55:44.736091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.103 [2024-11-26 20:55:44.736125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.103 qpair failed and we were unable to recover it. 00:25:41.103 [2024-11-26 20:55:44.736279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.103 [2024-11-26 20:55:44.736321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.103 qpair failed and we were unable to recover it. 00:25:41.103 [2024-11-26 20:55:44.736475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.103 [2024-11-26 20:55:44.736509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.103 qpair failed and we were unable to recover it. 00:25:41.103 [2024-11-26 20:55:44.736650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.103 [2024-11-26 20:55:44.736683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.103 qpair failed and we were unable to recover it. 00:25:41.103 [2024-11-26 20:55:44.736823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.103 [2024-11-26 20:55:44.736864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.103 qpair failed and we were unable to recover it. 00:25:41.103 [2024-11-26 20:55:44.737012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.103 [2024-11-26 20:55:44.737047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.103 qpair failed and we were unable to recover it. 00:25:41.103 [2024-11-26 20:55:44.737329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.103 [2024-11-26 20:55:44.737401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.103 qpair failed and we were unable to recover it. 00:25:41.103 [2024-11-26 20:55:44.737614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.103 [2024-11-26 20:55:44.737684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.103 qpair failed and we were unable to recover it. 00:25:41.103 [2024-11-26 20:55:44.737948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.103 [2024-11-26 20:55:44.738012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.103 qpair failed and we were unable to recover it. 
00:25:41.103 [2024-11-26 20:55:44.738228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.103 [2024-11-26 20:55:44.738292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.103 qpair failed and we were unable to recover it. 00:25:41.103 [2024-11-26 20:55:44.738615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.103 [2024-11-26 20:55:44.738699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.103 qpair failed and we were unable to recover it. 00:25:41.103 [2024-11-26 20:55:44.738922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.103 [2024-11-26 20:55:44.738978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.103 qpair failed and we were unable to recover it. 00:25:41.103 [2024-11-26 20:55:44.739110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.103 [2024-11-26 20:55:44.739143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.103 qpair failed and we were unable to recover it. 00:25:41.103 [2024-11-26 20:55:44.739354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.103 [2024-11-26 20:55:44.739389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.103 qpair failed and we were unable to recover it. 00:25:41.103 [2024-11-26 20:55:44.739500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.103 [2024-11-26 20:55:44.739534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.103 qpair failed and we were unable to recover it. 00:25:41.103 [2024-11-26 20:55:44.739681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.103 [2024-11-26 20:55:44.739714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.103 qpair failed and we were unable to recover it. 00:25:41.103 [2024-11-26 20:55:44.739970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.103 [2024-11-26 20:55:44.740037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.103 qpair failed and we were unable to recover it. 00:25:41.103 [2024-11-26 20:55:44.740264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.103 [2024-11-26 20:55:44.740348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.103 qpair failed and we were unable to recover it. 00:25:41.103 [2024-11-26 20:55:44.740684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.103 [2024-11-26 20:55:44.740778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.103 qpair failed and we were unable to recover it. 
00:25:41.103 [2024-11-26 20:55:44.741083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.103 [2024-11-26 20:55:44.741117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.103 qpair failed and we were unable to recover it. 00:25:41.103 [2024-11-26 20:55:44.741241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.103 [2024-11-26 20:55:44.741276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.103 qpair failed and we were unable to recover it. 00:25:41.103 [2024-11-26 20:55:44.741428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.103 [2024-11-26 20:55:44.741482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.378 qpair failed and we were unable to recover it. 00:25:41.378 [2024-11-26 20:55:44.741701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.378 [2024-11-26 20:55:44.741772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.378 qpair failed and we were unable to recover it. 00:25:41.378 [2024-11-26 20:55:44.742016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.378 [2024-11-26 20:55:44.742049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.378 qpair failed and we were unable to recover it. 00:25:41.378 [2024-11-26 20:55:44.742216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.378 [2024-11-26 20:55:44.742250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.378 qpair failed and we were unable to recover it. 00:25:41.378 [2024-11-26 20:55:44.742511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.378 [2024-11-26 20:55:44.742546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.378 qpair failed and we were unable to recover it. 00:25:41.378 [2024-11-26 20:55:44.742685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.378 [2024-11-26 20:55:44.742718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.378 qpair failed and we were unable to recover it. 00:25:41.378 [2024-11-26 20:55:44.742858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.378 [2024-11-26 20:55:44.742892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.378 qpair failed and we were unable to recover it. 00:25:41.378 [2024-11-26 20:55:44.743008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.378 [2024-11-26 20:55:44.743040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.378 qpair failed and we were unable to recover it. 
00:25:41.378 [2024-11-26 20:55:44.743266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.378 [2024-11-26 20:55:44.743361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.378 qpair failed and we were unable to recover it. 00:25:41.378 [2024-11-26 20:55:44.743586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.378 [2024-11-26 20:55:44.743652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.378 qpair failed and we were unable to recover it. 00:25:41.378 [2024-11-26 20:55:44.743892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.378 [2024-11-26 20:55:44.743958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.378 qpair failed and we were unable to recover it. 00:25:41.378 [2024-11-26 20:55:44.744202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.378 [2024-11-26 20:55:44.744256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.378 qpair failed and we were unable to recover it. 00:25:41.378 [2024-11-26 20:55:44.744419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.378 [2024-11-26 20:55:44.744453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.378 qpair failed and we were unable to recover it. 00:25:41.378 [2024-11-26 20:55:44.744596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.378 [2024-11-26 20:55:44.744629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.378 qpair failed and we were unable to recover it. 00:25:41.378 [2024-11-26 20:55:44.744848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.378 [2024-11-26 20:55:44.744911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.378 qpair failed and we were unable to recover it. 00:25:41.378 [2024-11-26 20:55:44.745160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.378 [2024-11-26 20:55:44.745222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.378 qpair failed and we were unable to recover it. 00:25:41.378 [2024-11-26 20:55:44.745464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.378 [2024-11-26 20:55:44.745540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.378 qpair failed and we were unable to recover it. 00:25:41.378 [2024-11-26 20:55:44.745750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.378 [2024-11-26 20:55:44.745816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.378 qpair failed and we were unable to recover it. 
00:25:41.378 [2024-11-26 20:55:44.746062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.378 [2024-11-26 20:55:44.746124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.378 qpair failed and we were unable to recover it. 00:25:41.378 [2024-11-26 20:55:44.746373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.378 [2024-11-26 20:55:44.746417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.378 qpair failed and we were unable to recover it. 00:25:41.378 [2024-11-26 20:55:44.746529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.378 [2024-11-26 20:55:44.746572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.378 qpair failed and we were unable to recover it. 00:25:41.378 [2024-11-26 20:55:44.746719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.378 [2024-11-26 20:55:44.746780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.378 qpair failed and we were unable to recover it. 00:25:41.378 [2024-11-26 20:55:44.747074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.378 [2024-11-26 20:55:44.747136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.378 qpair failed and we were unable to recover it. 00:25:41.378 [2024-11-26 20:55:44.747433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.378 [2024-11-26 20:55:44.747503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.378 qpair failed and we were unable to recover it. 00:25:41.378 [2024-11-26 20:55:44.747758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.378 [2024-11-26 20:55:44.747823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.378 qpair failed and we were unable to recover it. 00:25:41.378 [2024-11-26 20:55:44.748060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.378 [2024-11-26 20:55:44.748093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.378 qpair failed and we were unable to recover it. 00:25:41.378 [2024-11-26 20:55:44.748236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.378 [2024-11-26 20:55:44.748270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.378 qpair failed and we were unable to recover it. 00:25:41.378 [2024-11-26 20:55:44.748460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.378 [2024-11-26 20:55:44.748493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.378 qpair failed and we were unable to recover it. 
00:25:41.378 [2024-11-26 20:55:44.748617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.378 [2024-11-26 20:55:44.748649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.378 qpair failed and we were unable to recover it. 00:25:41.378 [2024-11-26 20:55:44.748794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.378 [2024-11-26 20:55:44.748827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.378 qpair failed and we were unable to recover it. 00:25:41.378 [2024-11-26 20:55:44.748974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.378 [2024-11-26 20:55:44.749006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.378 qpair failed and we were unable to recover it. 00:25:41.378 [2024-11-26 20:55:44.749143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.378 [2024-11-26 20:55:44.749176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.378 qpair failed and we were unable to recover it. 00:25:41.378 [2024-11-26 20:55:44.749349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.749382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 00:25:41.379 [2024-11-26 20:55:44.749489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.749522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 00:25:41.379 [2024-11-26 20:55:44.749662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.749694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 00:25:41.379 [2024-11-26 20:55:44.749950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.749982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 00:25:41.379 [2024-11-26 20:55:44.750095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.750127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 00:25:41.379 [2024-11-26 20:55:44.750281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.750327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 
00:25:41.379 [2024-11-26 20:55:44.750594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.750628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 00:25:41.379 [2024-11-26 20:55:44.750761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.750793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 00:25:41.379 [2024-11-26 20:55:44.751011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.751073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 00:25:41.379 [2024-11-26 20:55:44.751334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.751377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 00:25:41.379 [2024-11-26 20:55:44.751515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.751547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 00:25:41.379 [2024-11-26 20:55:44.751836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.751899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 00:25:41.379 [2024-11-26 20:55:44.752132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.752164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 00:25:41.379 [2024-11-26 20:55:44.752275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.752324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 00:25:41.379 [2024-11-26 20:55:44.752498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.752531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 00:25:41.379 [2024-11-26 20:55:44.752707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.752740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 
00:25:41.379 [2024-11-26 20:55:44.752856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.752890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 00:25:41.379 [2024-11-26 20:55:44.753089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.753147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 00:25:41.379 [2024-11-26 20:55:44.753256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.753290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 00:25:41.379 [2024-11-26 20:55:44.753508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.753575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 00:25:41.379 [2024-11-26 20:55:44.753865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.753898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 00:25:41.379 [2024-11-26 20:55:44.754035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.754067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 00:25:41.379 [2024-11-26 20:55:44.754328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.754363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 00:25:41.379 [2024-11-26 20:55:44.754640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.754703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 00:25:41.379 [2024-11-26 20:55:44.754919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.754992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 00:25:41.379 [2024-11-26 20:55:44.755212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.755274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 
00:25:41.379 [2024-11-26 20:55:44.755581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.755646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 00:25:41.379 [2024-11-26 20:55:44.755933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.755995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 00:25:41.379 [2024-11-26 20:55:44.756252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.756333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 00:25:41.379 [2024-11-26 20:55:44.756579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.756652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 00:25:41.379 [2024-11-26 20:55:44.756895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.756957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 00:25:41.379 [2024-11-26 20:55:44.757201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.757263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 00:25:41.379 [2024-11-26 20:55:44.757581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.757648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 00:25:41.379 [2024-11-26 20:55:44.757904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.757966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 00:25:41.379 [2024-11-26 20:55:44.758255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.758338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 00:25:41.379 [2024-11-26 20:55:44.758581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.758644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 
00:25:41.379 [2024-11-26 20:55:44.758835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.379 [2024-11-26 20:55:44.758899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.379 qpair failed and we were unable to recover it. 00:25:41.379 [2024-11-26 20:55:44.759105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.759167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 00:25:41.380 [2024-11-26 20:55:44.759439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.759506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 00:25:41.380 [2024-11-26 20:55:44.759793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.759855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 00:25:41.380 [2024-11-26 20:55:44.760104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.760166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 00:25:41.380 [2024-11-26 20:55:44.760409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.760474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 00:25:41.380 [2024-11-26 20:55:44.760712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.760773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 00:25:41.380 [2024-11-26 20:55:44.761055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.761117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 00:25:41.380 [2024-11-26 20:55:44.761402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.761466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 00:25:41.380 [2024-11-26 20:55:44.761715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.761778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 
00:25:41.380 [2024-11-26 20:55:44.762025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.762091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 00:25:41.380 [2024-11-26 20:55:44.762344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.762409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 00:25:41.380 [2024-11-26 20:55:44.762666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.762728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 00:25:41.380 [2024-11-26 20:55:44.762989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.763051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 00:25:41.380 [2024-11-26 20:55:44.763330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.763412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 00:25:41.380 [2024-11-26 20:55:44.763675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.763738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 00:25:41.380 [2024-11-26 20:55:44.764018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.764080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 00:25:41.380 [2024-11-26 20:55:44.764330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.764396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 00:25:41.380 [2024-11-26 20:55:44.764659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.764721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 00:25:41.380 [2024-11-26 20:55:44.764931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.764994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 
00:25:41.380 [2024-11-26 20:55:44.765231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.765294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 00:25:41.380 [2024-11-26 20:55:44.765602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.765665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 00:25:41.380 [2024-11-26 20:55:44.765904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.765966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 00:25:41.380 [2024-11-26 20:55:44.766190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.766255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 00:25:41.380 [2024-11-26 20:55:44.766477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.766540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 00:25:41.380 [2024-11-26 20:55:44.766780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.766845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 00:25:41.380 [2024-11-26 20:55:44.767063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.767124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 00:25:41.380 [2024-11-26 20:55:44.767351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.767425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 00:25:41.380 [2024-11-26 20:55:44.767692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.767768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 00:25:41.380 [2024-11-26 20:55:44.767967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.768033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 
00:25:41.380 [2024-11-26 20:55:44.768273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.768356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 00:25:41.380 [2024-11-26 20:55:44.768613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.768676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 00:25:41.380 [2024-11-26 20:55:44.768920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.768983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 00:25:41.380 [2024-11-26 20:55:44.769229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.769292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 00:25:41.380 [2024-11-26 20:55:44.769552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.769615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 00:25:41.380 [2024-11-26 20:55:44.769832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.769895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 00:25:41.380 [2024-11-26 20:55:44.770153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.770216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 00:25:41.380 [2024-11-26 20:55:44.770445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.770514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 00:25:41.380 [2024-11-26 20:55:44.770730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.380 [2024-11-26 20:55:44.770794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.380 qpair failed and we were unable to recover it. 00:25:41.380 [2024-11-26 20:55:44.771080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.381 [2024-11-26 20:55:44.771143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.381 qpair failed and we were unable to recover it. 
00:25:41.381 [2024-11-26 20:55:44.771387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.381 [2024-11-26 20:55:44.771455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.381 qpair failed and we were unable to recover it. 00:25:41.381 [2024-11-26 20:55:44.771746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.381 [2024-11-26 20:55:44.771809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.381 qpair failed and we were unable to recover it. 00:25:41.381 [2024-11-26 20:55:44.772074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.381 [2024-11-26 20:55:44.772138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.381 qpair failed and we were unable to recover it. 00:25:41.381 [2024-11-26 20:55:44.772361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.381 [2024-11-26 20:55:44.772448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.381 qpair failed and we were unable to recover it. 00:25:41.381 [2024-11-26 20:55:44.772690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.381 [2024-11-26 20:55:44.772752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.381 qpair failed and we were unable to recover it. 00:25:41.381 [2024-11-26 20:55:44.773031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.381 [2024-11-26 20:55:44.773094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.381 qpair failed and we were unable to recover it. 00:25:41.381 [2024-11-26 20:55:44.773382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.381 [2024-11-26 20:55:44.773446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.381 qpair failed and we were unable to recover it. 00:25:41.381 [2024-11-26 20:55:44.773735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.381 [2024-11-26 20:55:44.773797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.381 qpair failed and we were unable to recover it. 00:25:41.381 [2024-11-26 20:55:44.774017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.381 [2024-11-26 20:55:44.774080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.381 qpair failed and we were unable to recover it. 00:25:41.381 [2024-11-26 20:55:44.774337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.381 [2024-11-26 20:55:44.774414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.381 qpair failed and we were unable to recover it. 
00:25:41.381 [2024-11-26 20:55:44.774685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.381 [2024-11-26 20:55:44.774749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.381 qpair failed and we were unable to recover it. 00:25:41.381 [2024-11-26 20:55:44.774935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.381 [2024-11-26 20:55:44.774997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.381 qpair failed and we were unable to recover it. 00:25:41.381 [2024-11-26 20:55:44.775242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.381 [2024-11-26 20:55:44.775326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.381 qpair failed and we were unable to recover it. 00:25:41.381 [2024-11-26 20:55:44.775604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.381 [2024-11-26 20:55:44.775666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.381 qpair failed and we were unable to recover it. 00:25:41.381 [2024-11-26 20:55:44.775947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.381 [2024-11-26 20:55:44.776010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.381 qpair failed and we were unable to recover it. 00:25:41.381 [2024-11-26 20:55:44.776285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.381 [2024-11-26 20:55:44.776384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.381 qpair failed and we were unable to recover it. 00:25:41.381 [2024-11-26 20:55:44.776673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.381 [2024-11-26 20:55:44.776736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.381 qpair failed and we were unable to recover it. 00:25:41.381 [2024-11-26 20:55:44.776986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.381 [2024-11-26 20:55:44.777048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.381 qpair failed and we were unable to recover it. 00:25:41.381 [2024-11-26 20:55:44.777296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.381 [2024-11-26 20:55:44.777378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.381 qpair failed and we were unable to recover it. 00:25:41.381 [2024-11-26 20:55:44.777635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.381 [2024-11-26 20:55:44.777696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.381 qpair failed and we were unable to recover it. 
00:25:41.381 [2024-11-26 20:55:44.777950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.381 [2024-11-26 20:55:44.778012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.381 qpair failed and we were unable to recover it. 00:25:41.381 [2024-11-26 20:55:44.778259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.381 [2024-11-26 20:55:44.778361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.381 qpair failed and we were unable to recover it. 00:25:41.381 [2024-11-26 20:55:44.778641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.381 [2024-11-26 20:55:44.778704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.381 qpair failed and we were unable to recover it. 00:25:41.381 [2024-11-26 20:55:44.778920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.381 [2024-11-26 20:55:44.778983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.381 qpair failed and we were unable to recover it. 00:25:41.381 [2024-11-26 20:55:44.779273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.381 [2024-11-26 20:55:44.779365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.381 qpair failed and we were unable to recover it. 00:25:41.381 [2024-11-26 20:55:44.779640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.381 [2024-11-26 20:55:44.779707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.381 qpair failed and we were unable to recover it. 00:25:41.381 [2024-11-26 20:55:44.779903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.381 [2024-11-26 20:55:44.779966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.381 qpair failed and we were unable to recover it. 00:25:41.381 [2024-11-26 20:55:44.780183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.381 [2024-11-26 20:55:44.780248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.381 qpair failed and we were unable to recover it. 00:25:41.381 [2024-11-26 20:55:44.780554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.381 [2024-11-26 20:55:44.780630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.381 qpair failed and we were unable to recover it. 00:25:41.381 [2024-11-26 20:55:44.780876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.381 [2024-11-26 20:55:44.780940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.381 qpair failed and we were unable to recover it. 
00:25:41.381 [2024-11-26 20:55:44.781215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.381 [2024-11-26 20:55:44.781278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.381 qpair failed and we were unable to recover it. 00:25:41.381 [2024-11-26 20:55:44.781515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.381 [2024-11-26 20:55:44.781578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.381 qpair failed and we were unable to recover it. 00:25:41.381 [2024-11-26 20:55:44.781829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.381 [2024-11-26 20:55:44.781893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.381 qpair failed and we were unable to recover it. 00:25:41.382 [2024-11-26 20:55:44.782196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.382 [2024-11-26 20:55:44.782259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.382 qpair failed and we were unable to recover it. 00:25:41.382 [2024-11-26 20:55:44.782553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.382 [2024-11-26 20:55:44.782618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.382 qpair failed and we were unable to recover it. 00:25:41.382 [2024-11-26 20:55:44.782873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.382 [2024-11-26 20:55:44.782936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.382 qpair failed and we were unable to recover it. 00:25:41.382 [2024-11-26 20:55:44.783230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.382 [2024-11-26 20:55:44.783293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.382 qpair failed and we were unable to recover it. 00:25:41.382 [2024-11-26 20:55:44.783565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.382 [2024-11-26 20:55:44.783628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.382 qpair failed and we were unable to recover it. 00:25:41.382 [2024-11-26 20:55:44.783916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.382 [2024-11-26 20:55:44.783979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.382 qpair failed and we were unable to recover it. 00:25:41.382 [2024-11-26 20:55:44.784175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.382 [2024-11-26 20:55:44.784239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.382 qpair failed and we were unable to recover it. 
00:25:41.382 [2024-11-26 20:55:44.784502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.382 [2024-11-26 20:55:44.784565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.382 qpair failed and we were unable to recover it. 00:25:41.382 [2024-11-26 20:55:44.784816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.382 [2024-11-26 20:55:44.784879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.382 qpair failed and we were unable to recover it. 00:25:41.382 [2024-11-26 20:55:44.785119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.382 [2024-11-26 20:55:44.785184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.382 qpair failed and we were unable to recover it. 00:25:41.382 [2024-11-26 20:55:44.785434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.382 [2024-11-26 20:55:44.785498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.382 qpair failed and we were unable to recover it. 00:25:41.382 [2024-11-26 20:55:44.785788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.382 [2024-11-26 20:55:44.785851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.382 qpair failed and we were unable to recover it. 00:25:41.382 [2024-11-26 20:55:44.786134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.382 [2024-11-26 20:55:44.786197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.382 qpair failed and we were unable to recover it. 00:25:41.382 [2024-11-26 20:55:44.786474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.382 [2024-11-26 20:55:44.786540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.382 qpair failed and we were unable to recover it. 00:25:41.382 [2024-11-26 20:55:44.786781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.382 [2024-11-26 20:55:44.786845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.382 qpair failed and we were unable to recover it. 00:25:41.382 [2024-11-26 20:55:44.787130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.382 [2024-11-26 20:55:44.787194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.382 qpair failed and we were unable to recover it. 00:25:41.382 [2024-11-26 20:55:44.787464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.382 [2024-11-26 20:55:44.787529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.382 qpair failed and we were unable to recover it. 
00:25:41.382 [2024-11-26 20:55:44.787769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.382 [2024-11-26 20:55:44.787832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.382 qpair failed and we were unable to recover it. 00:25:41.382 [2024-11-26 20:55:44.788058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.382 [2024-11-26 20:55:44.788124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.382 qpair failed and we were unable to recover it. 00:25:41.382 [2024-11-26 20:55:44.788378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.382 [2024-11-26 20:55:44.788442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.382 qpair failed and we were unable to recover it. 00:25:41.382 [2024-11-26 20:55:44.788690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.382 [2024-11-26 20:55:44.788753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.382 qpair failed and we were unable to recover it. 00:25:41.382 [2024-11-26 20:55:44.788990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.382 [2024-11-26 20:55:44.789053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.382 qpair failed and we were unable to recover it. 00:25:41.382 [2024-11-26 20:55:44.789150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f1bf30 (9): Bad file descriptor 00:25:41.382 [2024-11-26 20:55:44.789610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.382 [2024-11-26 20:55:44.789711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.382 qpair failed and we were unable to recover it. 00:25:41.382 [2024-11-26 20:55:44.789994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.382 [2024-11-26 20:55:44.790066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.382 qpair failed and we were unable to recover it. 00:25:41.382 [2024-11-26 20:55:44.790366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.382 [2024-11-26 20:55:44.790435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.382 qpair failed and we were unable to recover it. 00:25:41.382 [2024-11-26 20:55:44.790668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.382 [2024-11-26 20:55:44.790737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.382 qpair failed and we were unable to recover it. 
00:25:41.382 [2024-11-26 20:55:44.790986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.382 [2024-11-26 20:55:44.791052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.382 qpair failed and we were unable to recover it. 00:25:41.382 [2024-11-26 20:55:44.791295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.382 [2024-11-26 20:55:44.791375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.382 qpair failed and we were unable to recover it. 00:25:41.382 [2024-11-26 20:55:44.791665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.382 [2024-11-26 20:55:44.791731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.382 qpair failed and we were unable to recover it. 00:25:41.382 [2024-11-26 20:55:44.791979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.382 [2024-11-26 20:55:44.792043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.382 qpair failed and we were unable to recover it. 00:25:41.382 [2024-11-26 20:55:44.792258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.382 [2024-11-26 20:55:44.792340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.382 qpair failed and we were unable to recover it. 00:25:41.383 [2024-11-26 20:55:44.792635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.383 [2024-11-26 20:55:44.792701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.383 qpair failed and we were unable to recover it. 00:25:41.383 [2024-11-26 20:55:44.792917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.383 [2024-11-26 20:55:44.792982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.383 qpair failed and we were unable to recover it. 00:25:41.383 [2024-11-26 20:55:44.793265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.383 [2024-11-26 20:55:44.793347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.383 qpair failed and we were unable to recover it. 00:25:41.383 [2024-11-26 20:55:44.793595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.383 [2024-11-26 20:55:44.793661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.383 qpair failed and we were unable to recover it. 00:25:41.383 [2024-11-26 20:55:44.793923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.383 [2024-11-26 20:55:44.793989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.383 qpair failed and we were unable to recover it. 
00:25:41.383 [2024-11-26 20:55:44.794253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.383 [2024-11-26 20:55:44.794337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.383 qpair failed and we were unable to recover it. 00:25:41.383 [2024-11-26 20:55:44.794629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.383 [2024-11-26 20:55:44.794694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.383 qpair failed and we were unable to recover it. 00:25:41.383 [2024-11-26 20:55:44.794929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.383 [2024-11-26 20:55:44.794996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.383 qpair failed and we were unable to recover it. 00:25:41.383 [2024-11-26 20:55:44.795209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.383 [2024-11-26 20:55:44.795274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.383 qpair failed and we were unable to recover it. 00:25:41.383 [2024-11-26 20:55:44.795590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.383 [2024-11-26 20:55:44.795654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.383 qpair failed and we were unable to recover it. 00:25:41.383 [2024-11-26 20:55:44.795898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.383 [2024-11-26 20:55:44.795962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.383 qpair failed and we were unable to recover it. 00:25:41.383 [2024-11-26 20:55:44.796243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.383 [2024-11-26 20:55:44.796328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.383 qpair failed and we were unable to recover it. 00:25:41.383 [2024-11-26 20:55:44.796636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.383 [2024-11-26 20:55:44.796700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.383 qpair failed and we were unable to recover it. 00:25:41.383 [2024-11-26 20:55:44.796952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.383 [2024-11-26 20:55:44.797016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.383 qpair failed and we were unable to recover it. 00:25:41.383 [2024-11-26 20:55:44.797335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.383 [2024-11-26 20:55:44.797402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.383 qpair failed and we were unable to recover it. 
00:25:41.383 [2024-11-26 20:55:44.797657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.383 [2024-11-26 20:55:44.797722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420
00:25:41.383 qpair failed and we were unable to recover it.
00:25:41.383 [... the same posix_sock_create connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error pair for tqpair=0x7f27b0000b90 (addr=10.0.0.2, port=4420) repeats continuously from 2024-11-26 20:55:44.797949 through 20:55:44.852839 (log prefixes 00:25:41.383-00:25:41.389), every attempt ending with "qpair failed and we were unable to recover it." ...]
00:25:41.389 [2024-11-26 20:55:44.852983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.389 [2024-11-26 20:55:44.853026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.389 qpair failed and we were unable to recover it. 00:25:41.389 [2024-11-26 20:55:44.853227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.389 [2024-11-26 20:55:44.853270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.389 qpair failed and we were unable to recover it. 00:25:41.389 [2024-11-26 20:55:44.853457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.389 [2024-11-26 20:55:44.853503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.389 qpair failed and we were unable to recover it. 00:25:41.389 [2024-11-26 20:55:44.853673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.389 [2024-11-26 20:55:44.853716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.389 qpair failed and we were unable to recover it. 00:25:41.389 [2024-11-26 20:55:44.853917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.389 [2024-11-26 20:55:44.853960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.389 qpair failed and we were unable to recover it. 00:25:41.389 [2024-11-26 20:55:44.854102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.389 [2024-11-26 20:55:44.854144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.389 qpair failed and we were unable to recover it. 00:25:41.389 [2024-11-26 20:55:44.854288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.389 [2024-11-26 20:55:44.854345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.389 qpair failed and we were unable to recover it. 00:25:41.389 [2024-11-26 20:55:44.854516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.389 [2024-11-26 20:55:44.854558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.389 qpair failed and we were unable to recover it. 00:25:41.389 [2024-11-26 20:55:44.854726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.389 [2024-11-26 20:55:44.854768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.389 qpair failed and we were unable to recover it. 00:25:41.389 [2024-11-26 20:55:44.854934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.389 [2024-11-26 20:55:44.854977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.389 qpair failed and we were unable to recover it. 
00:25:41.389 [2024-11-26 20:55:44.855176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.389 [2024-11-26 20:55:44.855219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.389 qpair failed and we were unable to recover it. 00:25:41.389 [2024-11-26 20:55:44.855368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.389 [2024-11-26 20:55:44.855412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.389 qpair failed and we were unable to recover it. 00:25:41.389 [2024-11-26 20:55:44.855577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.389 [2024-11-26 20:55:44.855618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.389 qpair failed and we were unable to recover it. 00:25:41.389 [2024-11-26 20:55:44.855747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.389 [2024-11-26 20:55:44.855792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.389 qpair failed and we were unable to recover it. 00:25:41.389 [2024-11-26 20:55:44.855964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.389 [2024-11-26 20:55:44.856009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.389 qpair failed and we were unable to recover it. 00:25:41.389 [2024-11-26 20:55:44.856264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.389 [2024-11-26 20:55:44.856347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.389 qpair failed and we were unable to recover it. 00:25:41.389 [2024-11-26 20:55:44.856637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.389 [2024-11-26 20:55:44.856701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.389 qpair failed and we were unable to recover it. 00:25:41.389 [2024-11-26 20:55:44.856956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.389 [2024-11-26 20:55:44.857021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.389 qpair failed and we were unable to recover it. 00:25:41.389 [2024-11-26 20:55:44.857277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.389 [2024-11-26 20:55:44.857367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.389 qpair failed and we were unable to recover it. 00:25:41.390 [2024-11-26 20:55:44.857670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.390 [2024-11-26 20:55:44.857735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.390 qpair failed and we were unable to recover it. 
00:25:41.390 [2024-11-26 20:55:44.857999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.390 [2024-11-26 20:55:44.858064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.390 qpair failed and we were unable to recover it. 00:25:41.390 [2024-11-26 20:55:44.858345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.390 [2024-11-26 20:55:44.858411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.390 qpair failed and we were unable to recover it. 00:25:41.390 [2024-11-26 20:55:44.858618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.390 [2024-11-26 20:55:44.858682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.390 qpair failed and we were unable to recover it. 00:25:41.390 [2024-11-26 20:55:44.858931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.390 [2024-11-26 20:55:44.858996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.390 qpair failed and we were unable to recover it. 00:25:41.390 [2024-11-26 20:55:44.859285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.390 [2024-11-26 20:55:44.859374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.390 qpair failed and we were unable to recover it. 00:25:41.390 [2024-11-26 20:55:44.859660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.390 [2024-11-26 20:55:44.859725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.390 qpair failed and we were unable to recover it. 00:25:41.390 [2024-11-26 20:55:44.860018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.390 [2024-11-26 20:55:44.860084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.390 qpair failed and we were unable to recover it. 00:25:41.390 [2024-11-26 20:55:44.860365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.390 [2024-11-26 20:55:44.860431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.390 qpair failed and we were unable to recover it. 00:25:41.390 [2024-11-26 20:55:44.860686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.390 [2024-11-26 20:55:44.860750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.390 qpair failed and we were unable to recover it. 00:25:41.390 [2024-11-26 20:55:44.860976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.390 [2024-11-26 20:55:44.861042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.390 qpair failed and we were unable to recover it. 
00:25:41.390 [2024-11-26 20:55:44.861250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.390 [2024-11-26 20:55:44.861327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.390 qpair failed and we were unable to recover it. 00:25:41.390 [2024-11-26 20:55:44.861527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.390 [2024-11-26 20:55:44.861595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.390 qpair failed and we were unable to recover it. 00:25:41.390 [2024-11-26 20:55:44.861801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.390 [2024-11-26 20:55:44.861879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.390 qpair failed and we were unable to recover it. 00:25:41.390 [2024-11-26 20:55:44.862183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.390 [2024-11-26 20:55:44.862248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.390 qpair failed and we were unable to recover it. 00:25:41.390 [2024-11-26 20:55:44.862517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.390 [2024-11-26 20:55:44.862583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.390 qpair failed and we were unable to recover it. 00:25:41.390 [2024-11-26 20:55:44.862869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.390 [2024-11-26 20:55:44.862932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.390 qpair failed and we were unable to recover it. 00:25:41.390 [2024-11-26 20:55:44.863186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.390 [2024-11-26 20:55:44.863253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.390 qpair failed and we were unable to recover it. 00:25:41.390 [2024-11-26 20:55:44.863528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.390 [2024-11-26 20:55:44.863593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.390 qpair failed and we were unable to recover it. 00:25:41.390 [2024-11-26 20:55:44.863874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.390 [2024-11-26 20:55:44.863939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.390 qpair failed and we were unable to recover it. 00:25:41.390 [2024-11-26 20:55:44.864187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.390 [2024-11-26 20:55:44.864251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.390 qpair failed and we were unable to recover it. 
00:25:41.390 [2024-11-26 20:55:44.864542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.390 [2024-11-26 20:55:44.864607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.390 qpair failed and we were unable to recover it. 00:25:41.390 [2024-11-26 20:55:44.864850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.390 [2024-11-26 20:55:44.864915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.390 qpair failed and we were unable to recover it. 00:25:41.390 [2024-11-26 20:55:44.865147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.390 [2024-11-26 20:55:44.865211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.390 qpair failed and we were unable to recover it. 00:25:41.390 [2024-11-26 20:55:44.865511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.390 [2024-11-26 20:55:44.865578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.390 qpair failed and we were unable to recover it. 00:25:41.390 [2024-11-26 20:55:44.865866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.390 [2024-11-26 20:55:44.865931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.390 qpair failed and we were unable to recover it. 00:25:41.390 [2024-11-26 20:55:44.866184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.390 [2024-11-26 20:55:44.866252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.390 qpair failed and we were unable to recover it. 00:25:41.390 [2024-11-26 20:55:44.866520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.390 [2024-11-26 20:55:44.866586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.390 qpair failed and we were unable to recover it. 00:25:41.390 [2024-11-26 20:55:44.866827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.390 [2024-11-26 20:55:44.866894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.390 qpair failed and we were unable to recover it. 00:25:41.390 [2024-11-26 20:55:44.867192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.390 [2024-11-26 20:55:44.867256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.390 qpair failed and we were unable to recover it. 00:25:41.390 [2024-11-26 20:55:44.867520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.390 [2024-11-26 20:55:44.867586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.390 qpair failed and we were unable to recover it. 
00:25:41.390 [2024-11-26 20:55:44.867839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.390 [2024-11-26 20:55:44.867904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.390 qpair failed and we were unable to recover it. 00:25:41.390 [2024-11-26 20:55:44.868201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.390 [2024-11-26 20:55:44.868266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.390 qpair failed and we were unable to recover it. 00:25:41.390 [2024-11-26 20:55:44.868602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.390 [2024-11-26 20:55:44.868667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.390 qpair failed and we were unable to recover it. 00:25:41.390 [2024-11-26 20:55:44.868952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.390 [2024-11-26 20:55:44.869016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.390 qpair failed and we were unable to recover it. 00:25:41.390 [2024-11-26 20:55:44.869265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.390 [2024-11-26 20:55:44.869376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.390 qpair failed and we were unable to recover it. 00:25:41.390 [2024-11-26 20:55:44.869649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.869714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 00:25:41.391 [2024-11-26 20:55:44.869966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.870030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 00:25:41.391 [2024-11-26 20:55:44.870252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.870337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 00:25:41.391 [2024-11-26 20:55:44.870605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.870670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 00:25:41.391 [2024-11-26 20:55:44.870977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.871043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 
00:25:41.391 [2024-11-26 20:55:44.871320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.871389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 00:25:41.391 [2024-11-26 20:55:44.871646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.871710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 00:25:41.391 [2024-11-26 20:55:44.872012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.872077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 00:25:41.391 [2024-11-26 20:55:44.872337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.872405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 00:25:41.391 [2024-11-26 20:55:44.872657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.872723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 00:25:41.391 [2024-11-26 20:55:44.873025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.873090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 00:25:41.391 [2024-11-26 20:55:44.873396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.873463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 00:25:41.391 [2024-11-26 20:55:44.873729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.873792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 00:25:41.391 [2024-11-26 20:55:44.874080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.874144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 00:25:41.391 [2024-11-26 20:55:44.874391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.874457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 
00:25:41.391 [2024-11-26 20:55:44.874699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.874766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 00:25:41.391 [2024-11-26 20:55:44.875067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.875133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 00:25:41.391 [2024-11-26 20:55:44.875395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.875472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 00:25:41.391 [2024-11-26 20:55:44.875734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.875800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 00:25:41.391 [2024-11-26 20:55:44.876096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.876162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 00:25:41.391 [2024-11-26 20:55:44.876435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.876505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 00:25:41.391 [2024-11-26 20:55:44.876766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.876833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 00:25:41.391 [2024-11-26 20:55:44.877059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.877124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 00:25:41.391 [2024-11-26 20:55:44.877365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.877432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 00:25:41.391 [2024-11-26 20:55:44.877715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.877780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 
00:25:41.391 [2024-11-26 20:55:44.877995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.878060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 00:25:41.391 [2024-11-26 20:55:44.878271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.878368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 00:25:41.391 [2024-11-26 20:55:44.878670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.878735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 00:25:41.391 [2024-11-26 20:55:44.878958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.879024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 00:25:41.391 [2024-11-26 20:55:44.879318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.879384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 00:25:41.391 [2024-11-26 20:55:44.879672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.879736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 00:25:41.391 [2024-11-26 20:55:44.880002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.880068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 00:25:41.391 [2024-11-26 20:55:44.880338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.880408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 00:25:41.391 [2024-11-26 20:55:44.880708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.880774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 00:25:41.391 [2024-11-26 20:55:44.881042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.881106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 
00:25:41.391 [2024-11-26 20:55:44.881404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.881471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 00:25:41.391 [2024-11-26 20:55:44.881723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.881788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 00:25:41.391 [2024-11-26 20:55:44.882013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.391 [2024-11-26 20:55:44.882078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.391 qpair failed and we were unable to recover it. 00:25:41.391 [2024-11-26 20:55:44.882334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.392 [2024-11-26 20:55:44.882411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.392 qpair failed and we were unable to recover it. 00:25:41.392 [2024-11-26 20:55:44.882670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.392 [2024-11-26 20:55:44.882736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.392 qpair failed and we were unable to recover it. 00:25:41.392 [2024-11-26 20:55:44.882960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.392 [2024-11-26 20:55:44.883024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.392 qpair failed and we were unable to recover it. 00:25:41.392 [2024-11-26 20:55:44.883281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.392 [2024-11-26 20:55:44.883364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.392 qpair failed and we were unable to recover it. 00:25:41.392 [2024-11-26 20:55:44.883660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.392 [2024-11-26 20:55:44.883727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.392 qpair failed and we were unable to recover it. 00:25:41.392 [2024-11-26 20:55:44.883974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.392 [2024-11-26 20:55:44.884040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.392 qpair failed and we were unable to recover it. 00:25:41.392 [2024-11-26 20:55:44.884318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.392 [2024-11-26 20:55:44.884386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.392 qpair failed and we were unable to recover it. 
00:25:41.392 [2024-11-26 20:55:44.884644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.392 [2024-11-26 20:55:44.884710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.392 qpair failed and we were unable to recover it. 00:25:41.392 [2024-11-26 20:55:44.884997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.392 [2024-11-26 20:55:44.885062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.392 qpair failed and we were unable to recover it. 00:25:41.392 [2024-11-26 20:55:44.885362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.392 [2024-11-26 20:55:44.885430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.392 qpair failed and we were unable to recover it. 00:25:41.392 [2024-11-26 20:55:44.885728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.392 [2024-11-26 20:55:44.885793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.392 qpair failed and we were unable to recover it. 00:25:41.392 [2024-11-26 20:55:44.886040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.392 [2024-11-26 20:55:44.886104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.392 qpair failed and we were unable to recover it. 00:25:41.392 [2024-11-26 20:55:44.886395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.392 [2024-11-26 20:55:44.886463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.392 qpair failed and we were unable to recover it. 00:25:41.392 [2024-11-26 20:55:44.886746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.392 [2024-11-26 20:55:44.886811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.392 qpair failed and we were unable to recover it. 00:25:41.392 [2024-11-26 20:55:44.887101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.392 [2024-11-26 20:55:44.887166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.392 qpair failed and we were unable to recover it. 00:25:41.392 [2024-11-26 20:55:44.887434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.392 [2024-11-26 20:55:44.887500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.392 qpair failed and we were unable to recover it. 00:25:41.392 [2024-11-26 20:55:44.887750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.392 [2024-11-26 20:55:44.887816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.392 qpair failed and we were unable to recover it. 
00:25:41.392 [2024-11-26 20:55:44.888096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.392 [2024-11-26 20:55:44.888162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.392 qpair failed and we were unable to recover it. 00:25:41.392 [2024-11-26 20:55:44.888445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.392 [2024-11-26 20:55:44.888510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.392 qpair failed and we were unable to recover it. 00:25:41.392 [2024-11-26 20:55:44.888760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.392 [2024-11-26 20:55:44.888836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.392 qpair failed and we were unable to recover it. 00:25:41.392 [2024-11-26 20:55:44.889095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.392 [2024-11-26 20:55:44.889161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.392 qpair failed and we were unable to recover it. 00:25:41.392 [2024-11-26 20:55:44.889448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.392 [2024-11-26 20:55:44.889514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.392 qpair failed and we were unable to recover it. 00:25:41.392 [2024-11-26 20:55:44.889779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.392 [2024-11-26 20:55:44.889845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.392 qpair failed and we were unable to recover it. 00:25:41.392 [2024-11-26 20:55:44.890091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.392 [2024-11-26 20:55:44.890156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.392 qpair failed and we were unable to recover it. 00:25:41.392 [2024-11-26 20:55:44.890371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.392 [2024-11-26 20:55:44.890436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.392 qpair failed and we were unable to recover it. 00:25:41.392 [2024-11-26 20:55:44.890638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.392 [2024-11-26 20:55:44.890702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.392 qpair failed and we were unable to recover it. 00:25:41.392 [2024-11-26 20:55:44.890904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.392 [2024-11-26 20:55:44.890973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.392 qpair failed and we were unable to recover it. 
00:25:41.392 [2024-11-26 20:55:44.891229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.392 [2024-11-26 20:55:44.891296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.392 qpair failed and we were unable to recover it. 00:25:41.392 [2024-11-26 20:55:44.891577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.392 [2024-11-26 20:55:44.891642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.392 qpair failed and we were unable to recover it. 00:25:41.392 [2024-11-26 20:55:44.891892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.392 [2024-11-26 20:55:44.891958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.392 qpair failed and we were unable to recover it. 00:25:41.392 [2024-11-26 20:55:44.892210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.392 [2024-11-26 20:55:44.892275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.392 qpair failed and we were unable to recover it. 00:25:41.392 [2024-11-26 20:55:44.892500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.392 [2024-11-26 20:55:44.892566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.392 qpair failed and we were unable to recover it. 00:25:41.392 [2024-11-26 20:55:44.892852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.392 [2024-11-26 20:55:44.892917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.393 qpair failed and we were unable to recover it. 00:25:41.393 [2024-11-26 20:55:44.893182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.393 [2024-11-26 20:55:44.893248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.393 qpair failed and we were unable to recover it. 00:25:41.393 [2024-11-26 20:55:44.893539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.393 [2024-11-26 20:55:44.893609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.393 qpair failed and we were unable to recover it. 00:25:41.393 [2024-11-26 20:55:44.893869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.393 [2024-11-26 20:55:44.893935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.393 qpair failed and we were unable to recover it. 00:25:41.393 [2024-11-26 20:55:44.894223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.393 [2024-11-26 20:55:44.894288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.393 qpair failed and we were unable to recover it. 
00:25:41.393 [2024-11-26 20:55:44.894558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.393 [2024-11-26 20:55:44.894623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420
00:25:41.393 qpair failed and we were unable to recover it.
00:25:41.393 [... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair repeats continuously for tqpair=0x7f27b0000b90 (addr=10.0.0.2, port=4420) from 20:55:44.894 through 20:55:44.962, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:25:41.398 [2024-11-26 20:55:44.962632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.398 [2024-11-26 20:55:44.962708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420
00:25:41.398 qpair failed and we were unable to recover it.
00:25:41.398 [2024-11-26 20:55:44.962961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.398 [2024-11-26 20:55:44.963027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.398 qpair failed and we were unable to recover it. 00:25:41.398 [2024-11-26 20:55:44.963271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.398 [2024-11-26 20:55:44.963353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.398 qpair failed and we were unable to recover it. 00:25:41.398 [2024-11-26 20:55:44.963597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.398 [2024-11-26 20:55:44.963662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.398 qpair failed and we were unable to recover it. 00:25:41.398 [2024-11-26 20:55:44.963901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.398 [2024-11-26 20:55:44.963965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.398 qpair failed and we were unable to recover it. 00:25:41.398 [2024-11-26 20:55:44.964261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.398 [2024-11-26 20:55:44.964358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.398 qpair failed and we were unable to recover it. 00:25:41.398 [2024-11-26 20:55:44.964590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.398 [2024-11-26 20:55:44.964655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.398 qpair failed and we were unable to recover it. 00:25:41.398 [2024-11-26 20:55:44.964903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.398 [2024-11-26 20:55:44.964966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.398 qpair failed and we were unable to recover it. 00:25:41.398 [2024-11-26 20:55:44.965258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.965343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 00:25:41.399 [2024-11-26 20:55:44.965597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.965665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 00:25:41.399 [2024-11-26 20:55:44.965960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.966023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 
00:25:41.399 [2024-11-26 20:55:44.966328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.966394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 00:25:41.399 [2024-11-26 20:55:44.966694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.966760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 00:25:41.399 [2024-11-26 20:55:44.967001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.967068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 00:25:41.399 [2024-11-26 20:55:44.967344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.967412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 00:25:41.399 [2024-11-26 20:55:44.967633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.967698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 00:25:41.399 [2024-11-26 20:55:44.967960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.968024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 00:25:41.399 [2024-11-26 20:55:44.968329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.968395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 00:25:41.399 [2024-11-26 20:55:44.968639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.968703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 00:25:41.399 [2024-11-26 20:55:44.968958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.969022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 00:25:41.399 [2024-11-26 20:55:44.969253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.969332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 
00:25:41.399 [2024-11-26 20:55:44.969585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.969649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 00:25:41.399 [2024-11-26 20:55:44.969915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.969979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 00:25:41.399 [2024-11-26 20:55:44.970240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.970319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 00:25:41.399 [2024-11-26 20:55:44.970613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.970677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 00:25:41.399 [2024-11-26 20:55:44.970892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.970956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 00:25:41.399 [2024-11-26 20:55:44.971244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.971328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 00:25:41.399 [2024-11-26 20:55:44.971595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.971661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 00:25:41.399 [2024-11-26 20:55:44.971906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.971974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 00:25:41.399 [2024-11-26 20:55:44.972259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.972360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 00:25:41.399 [2024-11-26 20:55:44.972660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.972725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 
00:25:41.399 [2024-11-26 20:55:44.972987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.973052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 00:25:41.399 [2024-11-26 20:55:44.973328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.973396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 00:25:41.399 [2024-11-26 20:55:44.973653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.973718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 00:25:41.399 [2024-11-26 20:55:44.974005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.974069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 00:25:41.399 [2024-11-26 20:55:44.974335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.974401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 00:25:41.399 [2024-11-26 20:55:44.974600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.974668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 00:25:41.399 [2024-11-26 20:55:44.974945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.975010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 00:25:41.399 [2024-11-26 20:55:44.975228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.975295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 00:25:41.399 [2024-11-26 20:55:44.975607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.975673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 00:25:41.399 [2024-11-26 20:55:44.975926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.976001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 
00:25:41.399 [2024-11-26 20:55:44.976321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.976388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 00:25:41.399 [2024-11-26 20:55:44.976632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.976697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 00:25:41.399 [2024-11-26 20:55:44.976908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.976975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 00:25:41.399 [2024-11-26 20:55:44.977262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.977347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 00:25:41.399 [2024-11-26 20:55:44.977605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.399 [2024-11-26 20:55:44.977670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.399 qpair failed and we were unable to recover it. 00:25:41.399 [2024-11-26 20:55:44.977909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.977972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 00:25:41.400 [2024-11-26 20:55:44.978258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.978354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 00:25:41.400 [2024-11-26 20:55:44.978641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.978706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 00:25:41.400 [2024-11-26 20:55:44.978986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.979050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 00:25:41.400 [2024-11-26 20:55:44.979320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.979386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 
00:25:41.400 [2024-11-26 20:55:44.979599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.979665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 00:25:41.400 [2024-11-26 20:55:44.979884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.979949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 00:25:41.400 [2024-11-26 20:55:44.980234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.980299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 00:25:41.400 [2024-11-26 20:55:44.980587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.980653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 00:25:41.400 [2024-11-26 20:55:44.980903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.980967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 00:25:41.400 [2024-11-26 20:55:44.981230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.981294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 00:25:41.400 [2024-11-26 20:55:44.981578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.981644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 00:25:41.400 [2024-11-26 20:55:44.981908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.981973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 00:25:41.400 [2024-11-26 20:55:44.982223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.982288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 00:25:41.400 [2024-11-26 20:55:44.982558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.982622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 
00:25:41.400 [2024-11-26 20:55:44.982868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.982936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 00:25:41.400 [2024-11-26 20:55:44.983188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.983255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 00:25:41.400 [2024-11-26 20:55:44.983530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.983596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 00:25:41.400 [2024-11-26 20:55:44.983884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.983948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 00:25:41.400 [2024-11-26 20:55:44.984198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.984263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 00:25:41.400 [2024-11-26 20:55:44.984495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.984559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 00:25:41.400 [2024-11-26 20:55:44.984783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.984848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 00:25:41.400 [2024-11-26 20:55:44.985096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.985161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 00:25:41.400 [2024-11-26 20:55:44.985447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.985513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 00:25:41.400 [2024-11-26 20:55:44.985812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.985876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 
00:25:41.400 [2024-11-26 20:55:44.986116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.986183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 00:25:41.400 [2024-11-26 20:55:44.986435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.986501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 00:25:41.400 [2024-11-26 20:55:44.986758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.986823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 00:25:41.400 [2024-11-26 20:55:44.987114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.987178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 00:25:41.400 [2024-11-26 20:55:44.987468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.987533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 00:25:41.400 [2024-11-26 20:55:44.987821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.987884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 00:25:41.400 [2024-11-26 20:55:44.988139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.988204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 00:25:41.400 [2024-11-26 20:55:44.988475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.988541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 00:25:41.400 [2024-11-26 20:55:44.988829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.988895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 00:25:41.400 [2024-11-26 20:55:44.989113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.989188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 
00:25:41.400 [2024-11-26 20:55:44.989458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.989525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 00:25:41.400 [2024-11-26 20:55:44.989731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.989795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 00:25:41.400 [2024-11-26 20:55:44.990050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.400 [2024-11-26 20:55:44.990117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.400 qpair failed and we were unable to recover it. 00:25:41.400 [2024-11-26 20:55:44.990408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.401 [2024-11-26 20:55:44.990475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.401 qpair failed and we were unable to recover it. 00:25:41.401 [2024-11-26 20:55:44.990755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.401 [2024-11-26 20:55:44.990819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.401 qpair failed and we were unable to recover it. 00:25:41.401 [2024-11-26 20:55:44.991039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.401 [2024-11-26 20:55:44.991103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.401 qpair failed and we were unable to recover it. 00:25:41.401 [2024-11-26 20:55:44.991325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.401 [2024-11-26 20:55:44.991395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.401 qpair failed and we were unable to recover it. 00:25:41.401 [2024-11-26 20:55:44.991614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.401 [2024-11-26 20:55:44.991679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.401 qpair failed and we were unable to recover it. 00:25:41.401 [2024-11-26 20:55:44.991924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.401 [2024-11-26 20:55:44.991990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.401 qpair failed and we were unable to recover it. 00:25:41.401 [2024-11-26 20:55:44.992192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.401 [2024-11-26 20:55:44.992260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.401 qpair failed and we were unable to recover it. 
00:25:41.401 [2024-11-26 20:55:44.992516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.401 [2024-11-26 20:55:44.992583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.401 qpair failed and we were unable to recover it. 00:25:41.401 [2024-11-26 20:55:44.992822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.401 [2024-11-26 20:55:44.992888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.401 qpair failed and we were unable to recover it. 00:25:41.401 [2024-11-26 20:55:44.993172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.401 [2024-11-26 20:55:44.993236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.401 qpair failed and we were unable to recover it. 00:25:41.401 [2024-11-26 20:55:44.993504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.401 [2024-11-26 20:55:44.993571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.401 qpair failed and we were unable to recover it. 00:25:41.401 [2024-11-26 20:55:44.993861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.401 [2024-11-26 20:55:44.993924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.401 qpair failed and we were unable to recover it. 00:25:41.401 [2024-11-26 20:55:44.994125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.401 [2024-11-26 20:55:44.994190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.401 qpair failed and we were unable to recover it. 00:25:41.401 [2024-11-26 20:55:44.994487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.401 [2024-11-26 20:55:44.994554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.401 qpair failed and we were unable to recover it. 00:25:41.401 [2024-11-26 20:55:44.994814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.401 [2024-11-26 20:55:44.994878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.401 qpair failed and we were unable to recover it. 00:25:41.401 [2024-11-26 20:55:44.995145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.401 [2024-11-26 20:55:44.995208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.401 qpair failed and we were unable to recover it. 00:25:41.401 [2024-11-26 20:55:44.995494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.401 [2024-11-26 20:55:44.995561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.401 qpair failed and we were unable to recover it. 
00:25:41.401 [2024-11-26 20:55:44.995842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.401 [2024-11-26 20:55:44.995908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.401 qpair failed and we were unable to recover it. 00:25:41.401 [2024-11-26 20:55:44.996178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.401 [2024-11-26 20:55:44.996243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.401 qpair failed and we were unable to recover it. 00:25:41.401 [2024-11-26 20:55:44.996512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.401 [2024-11-26 20:55:44.996577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.401 qpair failed and we were unable to recover it. 00:25:41.401 [2024-11-26 20:55:44.996833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.401 [2024-11-26 20:55:44.996900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.401 qpair failed and we were unable to recover it. 00:25:41.401 [2024-11-26 20:55:44.997142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.401 [2024-11-26 20:55:44.997208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.401 qpair failed and we were unable to recover it. 00:25:41.401 [2024-11-26 20:55:44.997479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.401 [2024-11-26 20:55:44.997545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.401 qpair failed and we were unable to recover it. 00:25:41.401 [2024-11-26 20:55:44.997842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.401 [2024-11-26 20:55:44.997907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.401 qpair failed and we were unable to recover it. 00:25:41.401 [2024-11-26 20:55:44.998200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.401 [2024-11-26 20:55:44.998266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.401 qpair failed and we were unable to recover it. 00:25:41.401 [2024-11-26 20:55:44.998571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.401 [2024-11-26 20:55:44.998636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.401 qpair failed and we were unable to recover it. 00:25:41.401 [2024-11-26 20:55:44.998891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.401 [2024-11-26 20:55:44.998956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.401 qpair failed and we were unable to recover it. 
00:25:41.401 [2024-11-26 20:55:44.999211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.401 [2024-11-26 20:55:44.999279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.401 qpair failed and we were unable to recover it. 00:25:41.401 [2024-11-26 20:55:44.999620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.401 [2024-11-26 20:55:44.999685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.401 qpair failed and we were unable to recover it. 00:25:41.401 [2024-11-26 20:55:44.999912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.401 [2024-11-26 20:55:44.999977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.401 qpair failed and we were unable to recover it. 00:25:41.401 [2024-11-26 20:55:45.000228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.401 [2024-11-26 20:55:45.000293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.401 qpair failed and we were unable to recover it. 00:25:41.401 [2024-11-26 20:55:45.000565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.401 [2024-11-26 20:55:45.000633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.401 qpair failed and we were unable to recover it. 00:25:41.401 [2024-11-26 20:55:45.000924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.401 [2024-11-26 20:55:45.000989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.401 qpair failed and we were unable to recover it. 00:25:41.401 [2024-11-26 20:55:45.001267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.001355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 00:25:41.402 [2024-11-26 20:55:45.001653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.001717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 00:25:41.402 [2024-11-26 20:55:45.001957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.002022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 00:25:41.402 [2024-11-26 20:55:45.002257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.002359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 
00:25:41.402 [2024-11-26 20:55:45.002604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.002668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 00:25:41.402 [2024-11-26 20:55:45.002896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.002960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 00:25:41.402 [2024-11-26 20:55:45.003198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.003263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 00:25:41.402 [2024-11-26 20:55:45.003550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.003616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 00:25:41.402 [2024-11-26 20:55:45.003839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.003903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 00:25:41.402 [2024-11-26 20:55:45.004133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.004197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 00:25:41.402 [2024-11-26 20:55:45.004464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.004531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 00:25:41.402 [2024-11-26 20:55:45.004796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.004861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 00:25:41.402 [2024-11-26 20:55:45.005066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.005130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 00:25:41.402 [2024-11-26 20:55:45.005382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.005450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 
00:25:41.402 [2024-11-26 20:55:45.005747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.005811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 00:25:41.402 [2024-11-26 20:55:45.006076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.006141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 00:25:41.402 [2024-11-26 20:55:45.006399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.006467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 00:25:41.402 [2024-11-26 20:55:45.006743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.006808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 00:25:41.402 [2024-11-26 20:55:45.007104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.007168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 00:25:41.402 [2024-11-26 20:55:45.007381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.007448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 00:25:41.402 [2024-11-26 20:55:45.007636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.007702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 00:25:41.402 [2024-11-26 20:55:45.007959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.008024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 00:25:41.402 [2024-11-26 20:55:45.008263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.008342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 00:25:41.402 [2024-11-26 20:55:45.008595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.008661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 
00:25:41.402 [2024-11-26 20:55:45.008879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.008944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 00:25:41.402 [2024-11-26 20:55:45.009191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.009255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 00:25:41.402 [2024-11-26 20:55:45.009520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.009586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 00:25:41.402 [2024-11-26 20:55:45.009828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.009894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 00:25:41.402 [2024-11-26 20:55:45.010149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.010213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 00:25:41.402 [2024-11-26 20:55:45.010488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.010555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 00:25:41.402 [2024-11-26 20:55:45.010802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.010869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 00:25:41.402 [2024-11-26 20:55:45.011148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.011212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 00:25:41.402 [2024-11-26 20:55:45.011504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.011571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 00:25:41.402 [2024-11-26 20:55:45.011855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.011920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 
00:25:41.402 [2024-11-26 20:55:45.012145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.012211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 00:25:41.402 [2024-11-26 20:55:45.012486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.012554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 00:25:41.402 [2024-11-26 20:55:45.012807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.012872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 00:25:41.402 [2024-11-26 20:55:45.013119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.402 [2024-11-26 20:55:45.013184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.402 qpair failed and we were unable to recover it. 00:25:41.403 [2024-11-26 20:55:45.013464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.403 [2024-11-26 20:55:45.013532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.403 qpair failed and we were unable to recover it. 00:25:41.403 [2024-11-26 20:55:45.013804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.403 [2024-11-26 20:55:45.013869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.403 qpair failed and we were unable to recover it. 00:25:41.403 [2024-11-26 20:55:45.014158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.403 [2024-11-26 20:55:45.014224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.403 qpair failed and we were unable to recover it. 00:25:41.403 [2024-11-26 20:55:45.014481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.403 [2024-11-26 20:55:45.014548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.403 qpair failed and we were unable to recover it. 00:25:41.403 [2024-11-26 20:55:45.014837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.403 [2024-11-26 20:55:45.014901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.403 qpair failed and we were unable to recover it. 00:25:41.403 [2024-11-26 20:55:45.015183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.403 [2024-11-26 20:55:45.015259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.403 qpair failed and we were unable to recover it. 
00:25:41.403 [2024-11-26 20:55:45.015559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.403 [2024-11-26 20:55:45.015623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.403 qpair failed and we were unable to recover it. 00:25:41.403 [2024-11-26 20:55:45.015852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.403 [2024-11-26 20:55:45.015918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.403 qpair failed and we were unable to recover it. 00:25:41.403 [2024-11-26 20:55:45.016203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.403 [2024-11-26 20:55:45.016268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.403 qpair failed and we were unable to recover it. 00:25:41.403 [2024-11-26 20:55:45.016542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.403 [2024-11-26 20:55:45.016606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.403 qpair failed and we were unable to recover it. 00:25:41.403 [2024-11-26 20:55:45.016864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.403 [2024-11-26 20:55:45.016928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.403 qpair failed and we were unable to recover it. 00:25:41.403 [2024-11-26 20:55:45.017221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.403 [2024-11-26 20:55:45.017286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.403 qpair failed and we were unable to recover it. 00:25:41.403 [2024-11-26 20:55:45.017575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.403 [2024-11-26 20:55:45.017640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.403 qpair failed and we were unable to recover it. 00:25:41.403 [2024-11-26 20:55:45.017917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.403 [2024-11-26 20:55:45.017982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.403 qpair failed and we were unable to recover it. 00:25:41.403 [2024-11-26 20:55:45.018188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.403 [2024-11-26 20:55:45.018254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.403 qpair failed and we were unable to recover it. 00:25:41.403 [2024-11-26 20:55:45.018528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.403 [2024-11-26 20:55:45.018593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.403 qpair failed and we were unable to recover it. 
00:25:41.403 [2024-11-26 20:55:45.018840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.403 [2024-11-26 20:55:45.018905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.403 qpair failed and we were unable to recover it. 00:25:41.403 [2024-11-26 20:55:45.019161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.403 [2024-11-26 20:55:45.019225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.403 qpair failed and we were unable to recover it. 00:25:41.403 [2024-11-26 20:55:45.019498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.403 [2024-11-26 20:55:45.019567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.403 qpair failed and we were unable to recover it. 00:25:41.403 [2024-11-26 20:55:45.019872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.403 [2024-11-26 20:55:45.019937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.403 qpair failed and we were unable to recover it. 00:25:41.403 [2024-11-26 20:55:45.020216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.403 [2024-11-26 20:55:45.020281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.403 qpair failed and we were unable to recover it. 00:25:41.403 [2024-11-26 20:55:45.020564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.403 [2024-11-26 20:55:45.020630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.403 qpair failed and we were unable to recover it. 00:25:41.403 [2024-11-26 20:55:45.020929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.403 [2024-11-26 20:55:45.020993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.403 qpair failed and we were unable to recover it. 00:25:41.403 [2024-11-26 20:55:45.021238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.403 [2024-11-26 20:55:45.021325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.403 qpair failed and we were unable to recover it. 00:25:41.403 [2024-11-26 20:55:45.021611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.403 [2024-11-26 20:55:45.021677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.403 qpair failed and we were unable to recover it. 00:25:41.403 [2024-11-26 20:55:45.021957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.403 [2024-11-26 20:55:45.022022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.403 qpair failed and we were unable to recover it. 
00:25:41.403 [2024-11-26 20:55:45.022271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.403 [2024-11-26 20:55:45.022357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.403 qpair failed and we were unable to recover it. 00:25:41.403 [2024-11-26 20:55:45.022609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.403 [2024-11-26 20:55:45.022676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.403 qpair failed and we were unable to recover it. 00:25:41.403 [2024-11-26 20:55:45.022874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.403 [2024-11-26 20:55:45.022940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.403 qpair failed and we were unable to recover it. 00:25:41.403 [2024-11-26 20:55:45.023223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.403 [2024-11-26 20:55:45.023288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.403 qpair failed and we were unable to recover it. 00:25:41.403 [2024-11-26 20:55:45.023617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.403 [2024-11-26 20:55:45.023681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.403 qpair failed and we were unable to recover it. 00:25:41.403 [2024-11-26 20:55:45.023924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.403 [2024-11-26 20:55:45.023992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.403 qpair failed and we were unable to recover it. 00:25:41.403 [2024-11-26 20:55:45.024252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.403 [2024-11-26 20:55:45.024345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.403 qpair failed and we were unable to recover it. 00:25:41.403 [2024-11-26 20:55:45.024570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.403 [2024-11-26 20:55:45.024635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 00:25:41.404 [2024-11-26 20:55:45.024874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.024939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 00:25:41.404 [2024-11-26 20:55:45.025197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.025263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 
00:25:41.404 [2024-11-26 20:55:45.025570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.025635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 00:25:41.404 [2024-11-26 20:55:45.025877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.025942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 00:25:41.404 [2024-11-26 20:55:45.026188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.026257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 00:25:41.404 [2024-11-26 20:55:45.026563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.026629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 00:25:41.404 [2024-11-26 20:55:45.026877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.026942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 00:25:41.404 [2024-11-26 20:55:45.027185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.027250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 00:25:41.404 [2024-11-26 20:55:45.027569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.027634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 00:25:41.404 [2024-11-26 20:55:45.027884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.027939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 00:25:41.404 [2024-11-26 20:55:45.028154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.028210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 00:25:41.404 [2024-11-26 20:55:45.028459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.028526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 
00:25:41.404 [2024-11-26 20:55:45.028781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.028837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 00:25:41.404 [2024-11-26 20:55:45.029007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.029063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 00:25:41.404 [2024-11-26 20:55:45.029238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.029298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 00:25:41.404 [2024-11-26 20:55:45.029540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.029598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 00:25:41.404 [2024-11-26 20:55:45.029851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.029906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 00:25:41.404 [2024-11-26 20:55:45.030159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.030215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 00:25:41.404 [2024-11-26 20:55:45.030449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.030506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 00:25:41.404 [2024-11-26 20:55:45.030742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.030797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 00:25:41.404 [2024-11-26 20:55:45.031013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.031069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 00:25:41.404 [2024-11-26 20:55:45.031293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.031366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 
00:25:41.404 [2024-11-26 20:55:45.031575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.031632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 00:25:41.404 [2024-11-26 20:55:45.031854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.031910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 00:25:41.404 [2024-11-26 20:55:45.032129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.032188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 00:25:41.404 [2024-11-26 20:55:45.032417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.032475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 00:25:41.404 [2024-11-26 20:55:45.032726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.032782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 00:25:41.404 [2024-11-26 20:55:45.032992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.033048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 00:25:41.404 [2024-11-26 20:55:45.033234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.033292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 00:25:41.404 [2024-11-26 20:55:45.033537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.033593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 00:25:41.404 [2024-11-26 20:55:45.033842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.033898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 00:25:41.404 [2024-11-26 20:55:45.034078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.034133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 
00:25:41.404 [2024-11-26 20:55:45.034381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.034439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 00:25:41.404 [2024-11-26 20:55:45.034652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.034709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 00:25:41.404 [2024-11-26 20:55:45.034960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.035015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 00:25:41.404 [2024-11-26 20:55:45.035269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.035341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 00:25:41.404 [2024-11-26 20:55:45.035620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.404 [2024-11-26 20:55:45.035684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.404 qpair failed and we were unable to recover it. 00:25:41.404 [2024-11-26 20:55:45.035905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.035970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 00:25:41.405 [2024-11-26 20:55:45.036237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.036321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 00:25:41.405 [2024-11-26 20:55:45.036615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.036681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 00:25:41.405 [2024-11-26 20:55:45.036926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.036992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 00:25:41.405 [2024-11-26 20:55:45.037232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.037297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 
00:25:41.405 [2024-11-26 20:55:45.037605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.037670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 00:25:41.405 [2024-11-26 20:55:45.037917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.037982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 00:25:41.405 [2024-11-26 20:55:45.038260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.038344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 00:25:41.405 [2024-11-26 20:55:45.038622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.038678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 00:25:41.405 [2024-11-26 20:55:45.038932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.038988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 00:25:41.405 [2024-11-26 20:55:45.039147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.039203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 00:25:41.405 [2024-11-26 20:55:45.039450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.039508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 00:25:41.405 [2024-11-26 20:55:45.039770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.039826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 00:25:41.405 [2024-11-26 20:55:45.040135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.040200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 00:25:41.405 [2024-11-26 20:55:45.040426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.040494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 
00:25:41.405 [2024-11-26 20:55:45.040667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.040724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 00:25:41.405 [2024-11-26 20:55:45.040938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.040994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 00:25:41.405 [2024-11-26 20:55:45.041200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.041257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 00:25:41.405 [2024-11-26 20:55:45.041469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.041526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 00:25:41.405 [2024-11-26 20:55:45.041743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.041801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 00:25:41.405 [2024-11-26 20:55:45.042014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.042069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 00:25:41.405 [2024-11-26 20:55:45.042282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.042358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 00:25:41.405 [2024-11-26 20:55:45.042559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.042614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 00:25:41.405 [2024-11-26 20:55:45.042843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.042899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 00:25:41.405 [2024-11-26 20:55:45.043151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.043206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 
00:25:41.405 [2024-11-26 20:55:45.043434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.043491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 00:25:41.405 [2024-11-26 20:55:45.043749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.043805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 00:25:41.405 [2024-11-26 20:55:45.044054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.044110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 00:25:41.405 [2024-11-26 20:55:45.044352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.044410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 00:25:41.405 [2024-11-26 20:55:45.044656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.044712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 00:25:41.405 [2024-11-26 20:55:45.044965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.045021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 00:25:41.405 [2024-11-26 20:55:45.045244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.045299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 00:25:41.405 [2024-11-26 20:55:45.045526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.045581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 00:25:41.405 [2024-11-26 20:55:45.045820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.045889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 00:25:41.405 [2024-11-26 20:55:45.046156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.046221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 
00:25:41.405 [2024-11-26 20:55:45.046479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.046537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 00:25:41.405 [2024-11-26 20:55:45.046722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.046787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 00:25:41.405 [2024-11-26 20:55:45.047032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.405 [2024-11-26 20:55:45.047096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.405 qpair failed and we were unable to recover it. 00:25:41.405 [2024-11-26 20:55:45.047299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.406 [2024-11-26 20:55:45.047378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.406 qpair failed and we were unable to recover it. 00:25:41.406 [2024-11-26 20:55:45.047658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.406 [2024-11-26 20:55:45.047724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.406 qpair failed and we were unable to recover it. 00:25:41.406 [2024-11-26 20:55:45.047982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.406 [2024-11-26 20:55:45.048046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.406 qpair failed and we were unable to recover it. 00:25:41.406 [2024-11-26 20:55:45.048385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.406 [2024-11-26 20:55:45.048485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.406 qpair failed and we were unable to recover it. 00:25:41.406 [2024-11-26 20:55:45.048767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.406 [2024-11-26 20:55:45.048835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.406 qpair failed and we were unable to recover it. 00:25:41.406 [2024-11-26 20:55:45.049051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.406 [2024-11-26 20:55:45.049115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.406 qpair failed and we were unable to recover it. 00:25:41.406 [2024-11-26 20:55:45.049404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.406 [2024-11-26 20:55:45.049470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.406 qpair failed and we were unable to recover it. 
00:25:41.406 [2024-11-26 20:55:45.049728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.406 [2024-11-26 20:55:45.049792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.406 qpair failed and we were unable to recover it. 00:25:41.406 [2024-11-26 20:55:45.049991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.406 [2024-11-26 20:55:45.050054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.406 qpair failed and we were unable to recover it. 00:25:41.406 [2024-11-26 20:55:45.050281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.406 [2024-11-26 20:55:45.050363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.406 qpair failed and we were unable to recover it. 00:25:41.406 [2024-11-26 20:55:45.050660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.406 [2024-11-26 20:55:45.050723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.406 qpair failed and we were unable to recover it. 00:25:41.406 [2024-11-26 20:55:45.051023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.406 [2024-11-26 20:55:45.051076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.406 qpair failed and we were unable to recover it. 00:25:41.406 [2024-11-26 20:55:45.051346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.406 [2024-11-26 20:55:45.051411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.406 qpair failed and we were unable to recover it. 00:25:41.406 [2024-11-26 20:55:45.051638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.406 [2024-11-26 20:55:45.051701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.406 qpair failed and we were unable to recover it. 00:25:41.406 [2024-11-26 20:55:45.051927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.406 [2024-11-26 20:55:45.051990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.406 qpair failed and we were unable to recover it. 00:25:41.406 [2024-11-26 20:55:45.052208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.406 [2024-11-26 20:55:45.052274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.406 qpair failed and we were unable to recover it. 00:25:41.406 [2024-11-26 20:55:45.052565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.406 [2024-11-26 20:55:45.052643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.406 qpair failed and we were unable to recover it. 
00:25:41.406 [2024-11-26 20:55:45.052891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.406 [2024-11-26 20:55:45.052955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.406 qpair failed and we were unable to recover it. 00:25:41.406 [2024-11-26 20:55:45.053162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.406 [2024-11-26 20:55:45.053236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.406 qpair failed and we were unable to recover it. 00:25:41.406 [2024-11-26 20:55:45.053525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.406 [2024-11-26 20:55:45.053590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.406 qpair failed and we were unable to recover it. 00:25:41.406 [2024-11-26 20:55:45.053834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.406 [2024-11-26 20:55:45.053898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.406 qpair failed and we were unable to recover it. 00:25:41.406 [2024-11-26 20:55:45.054153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.406 [2024-11-26 20:55:45.054217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.406 qpair failed and we were unable to recover it. 00:25:41.406 [2024-11-26 20:55:45.054485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.406 [2024-11-26 20:55:45.054549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.406 qpair failed and we were unable to recover it. 00:25:41.406 [2024-11-26 20:55:45.054810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.406 [2024-11-26 20:55:45.054873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.406 qpair failed and we were unable to recover it. 00:25:41.406 [2024-11-26 20:55:45.055174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.406 [2024-11-26 20:55:45.055228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.406 qpair failed and we were unable to recover it. 00:25:41.406 [2024-11-26 20:55:45.055494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.406 [2024-11-26 20:55:45.055557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.406 qpair failed and we were unable to recover it. 00:25:41.406 [2024-11-26 20:55:45.055773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.406 [2024-11-26 20:55:45.055838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.406 qpair failed and we were unable to recover it. 
00:25:41.406 [2024-11-26 20:55:45.056111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.406 [2024-11-26 20:55:45.056176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.406 qpair failed and we were unable to recover it. 00:25:41.406 [2024-11-26 20:55:45.056506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.406 [2024-11-26 20:55:45.056570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.406 qpair failed and we were unable to recover it. 00:25:41.406 [2024-11-26 20:55:45.056849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.406 [2024-11-26 20:55:45.056912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.406 qpair failed and we were unable to recover it. 00:25:41.406 [2024-11-26 20:55:45.057160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.406 [2024-11-26 20:55:45.057225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.406 qpair failed and we were unable to recover it. 00:25:41.406 [2024-11-26 20:55:45.057483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.406 [2024-11-26 20:55:45.057547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.406 qpair failed and we were unable to recover it. 00:25:41.406 [2024-11-26 20:55:45.057807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.406 [2024-11-26 20:55:45.057870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.406 qpair failed and we were unable to recover it. 00:25:41.685 [2024-11-26 20:55:45.058141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.685 [2024-11-26 20:55:45.058210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.685 qpair failed and we were unable to recover it. 00:25:41.685 [2024-11-26 20:55:45.058459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.685 [2024-11-26 20:55:45.058529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.685 qpair failed and we were unable to recover it. 00:25:41.685 [2024-11-26 20:55:45.058788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.685 [2024-11-26 20:55:45.058852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.685 qpair failed and we were unable to recover it. 00:25:41.685 [2024-11-26 20:55:45.059106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.685 [2024-11-26 20:55:45.059168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.685 qpair failed and we were unable to recover it. 
00:25:41.685 [2024-11-26 20:55:45.059412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.685 [2024-11-26 20:55:45.059482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.685 qpair failed and we were unable to recover it. 00:25:41.685 [2024-11-26 20:55:45.059730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.685 [2024-11-26 20:55:45.059798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.685 qpair failed and we were unable to recover it. 00:25:41.685 [2024-11-26 20:55:45.060095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.685 [2024-11-26 20:55:45.060159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.685 qpair failed and we were unable to recover it. 00:25:41.685 [2024-11-26 20:55:45.060448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.685 [2024-11-26 20:55:45.060514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.685 qpair failed and we were unable to recover it. 00:25:41.685 [2024-11-26 20:55:45.060736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.685 [2024-11-26 20:55:45.060800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.685 qpair failed and we were unable to recover it. 00:25:41.685 [2024-11-26 20:55:45.061044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.685 [2024-11-26 20:55:45.061110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.685 qpair failed and we were unable to recover it. 00:25:41.685 [2024-11-26 20:55:45.061407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.685 [2024-11-26 20:55:45.061473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.685 qpair failed and we were unable to recover it. 00:25:41.685 [2024-11-26 20:55:45.061770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.685 [2024-11-26 20:55:45.061834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.685 qpair failed and we were unable to recover it. 00:25:41.685 [2024-11-26 20:55:45.062083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.685 [2024-11-26 20:55:45.062146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.685 qpair failed and we were unable to recover it. 00:25:41.685 [2024-11-26 20:55:45.062387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.685 [2024-11-26 20:55:45.062454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.685 qpair failed and we were unable to recover it. 
00:25:41.685 [2024-11-26 20:55:45.062745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.685 [2024-11-26 20:55:45.062810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.685 qpair failed and we were unable to recover it. 00:25:41.685 [2024-11-26 20:55:45.063087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.685 [2024-11-26 20:55:45.063150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.685 qpair failed and we were unable to recover it. 00:25:41.685 [2024-11-26 20:55:45.063397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.685 [2024-11-26 20:55:45.063464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.685 qpair failed and we were unable to recover it. 00:25:41.685 [2024-11-26 20:55:45.063705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.685 [2024-11-26 20:55:45.063770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.685 qpair failed and we were unable to recover it. 00:25:41.685 [2024-11-26 20:55:45.064050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.685 [2024-11-26 20:55:45.064113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.685 qpair failed and we were unable to recover it. 00:25:41.685 [2024-11-26 20:55:45.064392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.685 [2024-11-26 20:55:45.064457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.685 qpair failed and we were unable to recover it. 00:25:41.685 [2024-11-26 20:55:45.064681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.685 [2024-11-26 20:55:45.064745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.685 qpair failed and we were unable to recover it. 00:25:41.685 [2024-11-26 20:55:45.064948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.685 [2024-11-26 20:55:45.065013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.685 qpair failed and we were unable to recover it. 00:25:41.685 [2024-11-26 20:55:45.065296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.685 [2024-11-26 20:55:45.065384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.685 qpair failed and we were unable to recover it. 00:25:41.685 [2024-11-26 20:55:45.065673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.685 [2024-11-26 20:55:45.065736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.685 qpair failed and we were unable to recover it. 
00:25:41.685 [2024-11-26 20:55:45.065990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.685 [2024-11-26 20:55:45.066055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.685 qpair failed and we were unable to recover it. 00:25:41.685 [2024-11-26 20:55:45.066317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.685 [2024-11-26 20:55:45.066383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.685 qpair failed and we were unable to recover it. 00:25:41.685 [2024-11-26 20:55:45.066678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.685 [2024-11-26 20:55:45.066740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.685 qpair failed and we were unable to recover it. 00:25:41.685 [2024-11-26 20:55:45.067030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.685 [2024-11-26 20:55:45.067093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.685 qpair failed and we were unable to recover it. 00:25:41.686 [2024-11-26 20:55:45.067344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.067420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 00:25:41.686 [2024-11-26 20:55:45.067707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.067772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 00:25:41.686 [2024-11-26 20:55:45.067974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.068037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 00:25:41.686 [2024-11-26 20:55:45.068238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.068301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 00:25:41.686 [2024-11-26 20:55:45.068641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.068705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 00:25:41.686 [2024-11-26 20:55:45.069007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.069070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 
00:25:41.686 [2024-11-26 20:55:45.069337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.069404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 00:25:41.686 [2024-11-26 20:55:45.069690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.069754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 00:25:41.686 [2024-11-26 20:55:45.069965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.070030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 00:25:41.686 [2024-11-26 20:55:45.070321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.070387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 00:25:41.686 [2024-11-26 20:55:45.070670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.070732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 00:25:41.686 [2024-11-26 20:55:45.071017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.071080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 00:25:41.686 [2024-11-26 20:55:45.071364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.071432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 00:25:41.686 [2024-11-26 20:55:45.071720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.071785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 00:25:41.686 [2024-11-26 20:55:45.072031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.072095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 00:25:41.686 [2024-11-26 20:55:45.072383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.072448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 
00:25:41.686 [2024-11-26 20:55:45.072739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.072801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 00:25:41.686 [2024-11-26 20:55:45.073094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.073157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 00:25:41.686 [2024-11-26 20:55:45.073453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.073518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 00:25:41.686 [2024-11-26 20:55:45.073808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.073870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 00:25:41.686 [2024-11-26 20:55:45.074146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.074209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 00:25:41.686 [2024-11-26 20:55:45.074484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.074551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 00:25:41.686 [2024-11-26 20:55:45.074798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.074873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 00:25:41.686 [2024-11-26 20:55:45.075122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.075186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 00:25:41.686 [2024-11-26 20:55:45.075409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.075477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 00:25:41.686 [2024-11-26 20:55:45.075774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.075837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 
00:25:41.686 [2024-11-26 20:55:45.076132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.076195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 00:25:41.686 [2024-11-26 20:55:45.076454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.076521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 00:25:41.686 [2024-11-26 20:55:45.076808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.076873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 00:25:41.686 [2024-11-26 20:55:45.077097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.077159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 00:25:41.686 [2024-11-26 20:55:45.077373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.077438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 00:25:41.686 [2024-11-26 20:55:45.077727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.077791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 00:25:41.686 [2024-11-26 20:55:45.078049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.078112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 00:25:41.686 [2024-11-26 20:55:45.078414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.078479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 00:25:41.686 [2024-11-26 20:55:45.078762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.078827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 00:25:41.686 [2024-11-26 20:55:45.079070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.079133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 
00:25:41.686 [2024-11-26 20:55:45.079407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.686 [2024-11-26 20:55:45.079475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.686 qpair failed and we were unable to recover it. 00:25:41.686 [2024-11-26 20:55:45.079734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.079798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 00:25:41.687 [2024-11-26 20:55:45.080037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.080099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 00:25:41.687 [2024-11-26 20:55:45.080341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.080407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 00:25:41.687 [2024-11-26 20:55:45.080667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.080732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 00:25:41.687 [2024-11-26 20:55:45.080981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.081047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 00:25:41.687 [2024-11-26 20:55:45.081341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.081406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 00:25:41.687 [2024-11-26 20:55:45.081659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.081722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 00:25:41.687 [2024-11-26 20:55:45.081961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.082025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 00:25:41.687 [2024-11-26 20:55:45.082331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.082396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 
00:25:41.687 [2024-11-26 20:55:45.082678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.082742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 00:25:41.687 [2024-11-26 20:55:45.083027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.083090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 00:25:41.687 [2024-11-26 20:55:45.083342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.083412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 00:25:41.687 [2024-11-26 20:55:45.083682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.083747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 00:25:41.687 [2024-11-26 20:55:45.083995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.084059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 00:25:41.687 [2024-11-26 20:55:45.084314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.084400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 00:25:41.687 [2024-11-26 20:55:45.084703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.084767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 00:25:41.687 [2024-11-26 20:55:45.085054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.085118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 00:25:41.687 [2024-11-26 20:55:45.085364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.085428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 00:25:41.687 [2024-11-26 20:55:45.085721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.085783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 
00:25:41.687 [2024-11-26 20:55:45.086027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.086090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 00:25:41.687 [2024-11-26 20:55:45.086353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.086418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 00:25:41.687 [2024-11-26 20:55:45.086658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.086721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 00:25:41.687 [2024-11-26 20:55:45.086963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.087026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 00:25:41.687 [2024-11-26 20:55:45.087264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.087356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 00:25:41.687 [2024-11-26 20:55:45.087652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.087714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 00:25:41.687 [2024-11-26 20:55:45.087987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.088060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 00:25:41.687 [2024-11-26 20:55:45.088343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.088407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 00:25:41.687 [2024-11-26 20:55:45.088654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.088719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 00:25:41.687 [2024-11-26 20:55:45.088953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.089015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 
00:25:41.687 [2024-11-26 20:55:45.089292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.089374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 00:25:41.687 [2024-11-26 20:55:45.089659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.089723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 00:25:41.687 [2024-11-26 20:55:45.090012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.090074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 00:25:41.687 [2024-11-26 20:55:45.090371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.090435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 00:25:41.687 [2024-11-26 20:55:45.090667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.090729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 00:25:41.687 [2024-11-26 20:55:45.090983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.091046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 00:25:41.687 [2024-11-26 20:55:45.091298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.091395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 00:25:41.687 [2024-11-26 20:55:45.091671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.091736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 00:25:41.687 [2024-11-26 20:55:45.091996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.687 [2024-11-26 20:55:45.092060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.687 qpair failed and we were unable to recover it. 00:25:41.687 [2024-11-26 20:55:45.092354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.092419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 
00:25:41.688 [2024-11-26 20:55:45.092644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.092708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 00:25:41.688 [2024-11-26 20:55:45.092965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.093027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 00:25:41.688 [2024-11-26 20:55:45.093332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.093396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 00:25:41.688 [2024-11-26 20:55:45.093684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.093748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 00:25:41.688 [2024-11-26 20:55:45.094037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.094099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 00:25:41.688 [2024-11-26 20:55:45.094354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.094419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 00:25:41.688 [2024-11-26 20:55:45.094661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.094725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 00:25:41.688 [2024-11-26 20:55:45.095010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.095071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 00:25:41.688 [2024-11-26 20:55:45.095285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.095379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 00:25:41.688 [2024-11-26 20:55:45.095636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.095701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 
00:25:41.688 [2024-11-26 20:55:45.095979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.096041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 00:25:41.688 [2024-11-26 20:55:45.096357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.096421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 00:25:41.688 [2024-11-26 20:55:45.096639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.096706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 00:25:41.688 [2024-11-26 20:55:45.096979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.097042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 00:25:41.688 [2024-11-26 20:55:45.097281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.097359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 00:25:41.688 [2024-11-26 20:55:45.097637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.097700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 00:25:41.688 [2024-11-26 20:55:45.097942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.098004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 00:25:41.688 [2024-11-26 20:55:45.098273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.098355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 00:25:41.688 [2024-11-26 20:55:45.098659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.098722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 00:25:41.688 [2024-11-26 20:55:45.098964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.099027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 
00:25:41.688 [2024-11-26 20:55:45.099268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.099367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 00:25:41.688 [2024-11-26 20:55:45.099640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.099705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 00:25:41.688 [2024-11-26 20:55:45.099958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.100020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 00:25:41.688 [2024-11-26 20:55:45.100250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.100340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 00:25:41.688 [2024-11-26 20:55:45.100642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.100705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 00:25:41.688 [2024-11-26 20:55:45.101014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.101077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 00:25:41.688 [2024-11-26 20:55:45.101336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.101411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 00:25:41.688 [2024-11-26 20:55:45.101693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.101757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 00:25:41.688 [2024-11-26 20:55:45.102052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.102115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 00:25:41.688 [2024-11-26 20:55:45.102368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.102435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 
00:25:41.688 [2024-11-26 20:55:45.102698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.102761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 00:25:41.688 [2024-11-26 20:55:45.103060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.103122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 00:25:41.688 [2024-11-26 20:55:45.103370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.103437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 00:25:41.688 [2024-11-26 20:55:45.103680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.103743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 00:25:41.688 [2024-11-26 20:55:45.103991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.104054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 00:25:41.688 [2024-11-26 20:55:45.104345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.104409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 00:25:41.688 [2024-11-26 20:55:45.104696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.688 [2024-11-26 20:55:45.104760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.688 qpair failed and we were unable to recover it. 00:25:41.688 [2024-11-26 20:55:45.105009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.689 [2024-11-26 20:55:45.105072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.689 qpair failed and we were unable to recover it. 00:25:41.689 [2024-11-26 20:55:45.105279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.689 [2024-11-26 20:55:45.105357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.689 qpair failed and we were unable to recover it. 00:25:41.689 [2024-11-26 20:55:45.105655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.689 [2024-11-26 20:55:45.105718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.689 qpair failed and we were unable to recover it. 
00:25:41.689 [2024-11-26 20:55:45.105967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.689 [2024-11-26 20:55:45.106031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.689 qpair failed and we were unable to recover it. 00:25:41.689 [2024-11-26 20:55:45.106279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.689 [2024-11-26 20:55:45.106361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.689 qpair failed and we were unable to recover it. 00:25:41.689 [2024-11-26 20:55:45.106561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.689 [2024-11-26 20:55:45.106627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.689 qpair failed and we were unable to recover it. 00:25:41.689 [2024-11-26 20:55:45.106890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.689 [2024-11-26 20:55:45.106953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.689 qpair failed and we were unable to recover it. 00:25:41.689 [2024-11-26 20:55:45.107193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.689 [2024-11-26 20:55:45.107259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.689 qpair failed and we were unable to recover it. 00:25:41.689 [2024-11-26 20:55:45.107514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.689 [2024-11-26 20:55:45.107580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.689 qpair failed and we were unable to recover it. 00:25:41.689 [2024-11-26 20:55:45.107822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.689 [2024-11-26 20:55:45.107888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.689 qpair failed and we were unable to recover it. 00:25:41.689 [2024-11-26 20:55:45.108174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.689 [2024-11-26 20:55:45.108241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.689 qpair failed and we were unable to recover it. 00:25:41.689 [2024-11-26 20:55:45.108568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.689 [2024-11-26 20:55:45.108632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.689 qpair failed and we were unable to recover it. 00:25:41.689 [2024-11-26 20:55:45.108912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.689 [2024-11-26 20:55:45.108975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.689 qpair failed and we were unable to recover it. 
00:25:41.689 [2024-11-26 20:55:45.109166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.689 [2024-11-26 20:55:45.109229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.689 qpair failed and we were unable to recover it. 00:25:41.689 [2024-11-26 20:55:45.109497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.689 [2024-11-26 20:55:45.109561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.689 qpair failed and we were unable to recover it. 00:25:41.689 [2024-11-26 20:55:45.109849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.689 [2024-11-26 20:55:45.109912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.689 qpair failed and we were unable to recover it. 00:25:41.689 [2024-11-26 20:55:45.110213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.689 [2024-11-26 20:55:45.110276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.689 qpair failed and we were unable to recover it. 00:25:41.689 [2024-11-26 20:55:45.110511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.689 [2024-11-26 20:55:45.110575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.689 qpair failed and we were unable to recover it. 00:25:41.689 [2024-11-26 20:55:45.110830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.689 [2024-11-26 20:55:45.110892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.689 qpair failed and we were unable to recover it. 00:25:41.689 [2024-11-26 20:55:45.111136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.689 [2024-11-26 20:55:45.111198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.689 qpair failed and we were unable to recover it. 00:25:41.689 [2024-11-26 20:55:45.111437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.689 [2024-11-26 20:55:45.111506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.689 qpair failed and we were unable to recover it. 00:25:41.689 [2024-11-26 20:55:45.111743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.689 [2024-11-26 20:55:45.111806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.689 qpair failed and we were unable to recover it. 00:25:41.689 [2024-11-26 20:55:45.112094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.689 [2024-11-26 20:55:45.112158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.689 qpair failed and we were unable to recover it. 
00:25:41.689 [2024-11-26 20:55:45.112416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.689 [2024-11-26 20:55:45.112481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.689 qpair failed and we were unable to recover it. 00:25:41.689 [2024-11-26 20:55:45.112779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.689 [2024-11-26 20:55:45.112841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.689 qpair failed and we were unable to recover it. 00:25:41.689 [2024-11-26 20:55:45.113133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.689 [2024-11-26 20:55:45.113196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.689 qpair failed and we were unable to recover it. 00:25:41.689 [2024-11-26 20:55:45.113468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.689 [2024-11-26 20:55:45.113534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.689 qpair failed and we were unable to recover it. 00:25:41.689 [2024-11-26 20:55:45.113789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.689 [2024-11-26 20:55:45.113851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.689 qpair failed and we were unable to recover it. 00:25:41.689 [2024-11-26 20:55:45.114099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.689 [2024-11-26 20:55:45.114162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.689 qpair failed and we were unable to recover it. 00:25:41.689 [2024-11-26 20:55:45.114400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.689 [2024-11-26 20:55:45.114484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.689 qpair failed and we were unable to recover it. 00:25:41.689 [2024-11-26 20:55:45.114693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.689 [2024-11-26 20:55:45.114756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.689 qpair failed and we were unable to recover it. 00:25:41.689 [2024-11-26 20:55:45.115048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.689 [2024-11-26 20:55:45.115111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.689 qpair failed and we were unable to recover it. 00:25:41.689 [2024-11-26 20:55:45.115361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.690 [2024-11-26 20:55:45.115428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.690 qpair failed and we were unable to recover it. 
00:25:41.690 [2024-11-26 20:55:45.115622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.690 [2024-11-26 20:55:45.115687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420
00:25:41.690 qpair failed and we were unable to recover it.
[... the same three messages (connect() failed, errno = 111; sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeat for every reconnect attempt from 2024-11-26 20:55:45.115622 through 20:55:45.184912; only the timestamps change ...]
00:25:41.695 [2024-11-26 20:55:45.184848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.695 [2024-11-26 20:55:45.184912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420
00:25:41.695 qpair failed and we were unable to recover it.
00:25:41.695 [2024-11-26 20:55:45.185190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.695 [2024-11-26 20:55:45.185253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.695 qpair failed and we were unable to recover it. 00:25:41.695 [2024-11-26 20:55:45.185530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.695 [2024-11-26 20:55:45.185595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.695 qpair failed and we were unable to recover it. 00:25:41.695 [2024-11-26 20:55:45.185845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.695 [2024-11-26 20:55:45.185907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.695 qpair failed and we were unable to recover it. 00:25:41.695 [2024-11-26 20:55:45.186191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.695 [2024-11-26 20:55:45.186253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.695 qpair failed and we were unable to recover it. 00:25:41.695 [2024-11-26 20:55:45.186553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.695 [2024-11-26 20:55:45.186617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.695 qpair failed and we were unable to recover it. 00:25:41.695 [2024-11-26 20:55:45.186870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.695 [2024-11-26 20:55:45.186932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.695 qpair failed and we were unable to recover it. 00:25:41.695 [2024-11-26 20:55:45.187181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.695 [2024-11-26 20:55:45.187243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.695 qpair failed and we were unable to recover it. 00:25:41.695 [2024-11-26 20:55:45.187469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.695 [2024-11-26 20:55:45.187534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.695 qpair failed and we were unable to recover it. 00:25:41.695 [2024-11-26 20:55:45.187815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.695 [2024-11-26 20:55:45.187879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.695 qpair failed and we were unable to recover it. 00:25:41.695 [2024-11-26 20:55:45.188070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.695 [2024-11-26 20:55:45.188134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.695 qpair failed and we were unable to recover it. 
00:25:41.695 [2024-11-26 20:55:45.188431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.695 [2024-11-26 20:55:45.188498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.695 qpair failed and we were unable to recover it. 00:25:41.695 [2024-11-26 20:55:45.188749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.695 [2024-11-26 20:55:45.188812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.695 qpair failed and we were unable to recover it. 00:25:41.695 [2024-11-26 20:55:45.189030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.189094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.189350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.189417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.189706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.189770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.190054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.190120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.190399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.190464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.190718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.190781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.191027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.191094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.191382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.191447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 
00:25:41.696 [2024-11-26 20:55:45.191748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.191812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.192098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.192162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.192420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.192488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.192733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.192797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.193085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.193149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.193433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.193500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.193763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.193838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.194074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.194138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.194419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.194483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.194773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.194836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 
00:25:41.696 [2024-11-26 20:55:45.195089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.195153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.195390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.195454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.195710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.195774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.196018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.196081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.196343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.196412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.196671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.196734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.196970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.197034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.197287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.197380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.197605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.197671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.197957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.198022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 
00:25:41.696 [2024-11-26 20:55:45.198355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.198422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.198686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.198750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.199032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.199094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.199333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.199401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.199653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.199716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.200012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.200075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.200334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.200399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.200659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.200723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.200893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.200956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.201211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.201274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 
00:25:41.696 [2024-11-26 20:55:45.201596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.201662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.201904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.201966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.202216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.202282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.202628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.202693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.202952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.203014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.203267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.696 [2024-11-26 20:55:45.203350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.696 qpair failed and we were unable to recover it. 00:25:41.696 [2024-11-26 20:55:45.203597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.203664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.203948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.204011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.204322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.204387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.204684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.204747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 
00:25:41.697 [2024-11-26 20:55:45.204984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.205047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.205290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.205388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.205607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.205673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.205867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.205930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.206208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.206271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.206587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.206650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.206946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.207019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.207237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.207322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.207556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.207619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.207855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.207919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 
00:25:41.697 [2024-11-26 20:55:45.208158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.208234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.208550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.208614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.208905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.208968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.209210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.209274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.209568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.209633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.209938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.210001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.210243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.210335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.210604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.210668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.210964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.211026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.211276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.211362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 
00:25:41.697 [2024-11-26 20:55:45.211603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.211666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.211955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.212017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.212324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.212389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.212645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.212710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.212999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.213061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.213300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.213398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.213629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.213692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.213989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.214051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.214288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.214379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.214632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.214695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 
00:25:41.697 [2024-11-26 20:55:45.214944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.215008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.215321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.215384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.215597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.215663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.215965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.216030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.216251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.216333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.216581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.216643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.216890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.216953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.217233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.217294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.217553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.217619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.697 [2024-11-26 20:55:45.217832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.217898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 
00:25:41.697 [2024-11-26 20:55:45.218146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.697 [2024-11-26 20:55:45.218211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.697 qpair failed and we were unable to recover it. 00:25:41.698 [2024-11-26 20:55:45.218529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.698 [2024-11-26 20:55:45.218595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.698 qpair failed and we were unable to recover it. 00:25:41.698 [2024-11-26 20:55:45.218899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.698 [2024-11-26 20:55:45.218961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.698 qpair failed and we were unable to recover it. 00:25:41.698 [2024-11-26 20:55:45.219255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.698 [2024-11-26 20:55:45.219336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.698 qpair failed and we were unable to recover it. 00:25:41.698 [2024-11-26 20:55:45.219631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.698 [2024-11-26 20:55:45.219694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.698 qpair failed and we were unable to recover it. 00:25:41.698 [2024-11-26 20:55:45.219975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.698 [2024-11-26 20:55:45.220037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.698 qpair failed and we were unable to recover it. 00:25:41.698 [2024-11-26 20:55:45.220334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.698 [2024-11-26 20:55:45.220410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.698 qpair failed and we were unable to recover it. 00:25:41.698 [2024-11-26 20:55:45.220629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.698 [2024-11-26 20:55:45.220696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.698 qpair failed and we were unable to recover it. 00:25:41.698 [2024-11-26 20:55:45.220948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.698 [2024-11-26 20:55:45.221012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.698 qpair failed and we were unable to recover it. 00:25:41.698 [2024-11-26 20:55:45.221222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.698 [2024-11-26 20:55:45.221285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.698 qpair failed and we were unable to recover it. 
00:25:41.698 [2024-11-26 20:55:45.221605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.698 [2024-11-26 20:55:45.221669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.698 qpair failed and we were unable to recover it. 00:25:41.698 [2024-11-26 20:55:45.221968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.698 [2024-11-26 20:55:45.222031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.698 qpair failed and we were unable to recover it. 00:25:41.698 [2024-11-26 20:55:45.222352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.698 [2024-11-26 20:55:45.222417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.698 qpair failed and we were unable to recover it. 00:25:41.698 [2024-11-26 20:55:45.222697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.698 [2024-11-26 20:55:45.222763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.698 qpair failed and we were unable to recover it. 00:25:41.698 [2024-11-26 20:55:45.223045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.698 [2024-11-26 20:55:45.223109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.698 qpair failed and we were unable to recover it. 00:25:41.698 [2024-11-26 20:55:45.223359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.698 [2024-11-26 20:55:45.223424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.698 qpair failed and we were unable to recover it. 00:25:41.698 [2024-11-26 20:55:45.223631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.698 [2024-11-26 20:55:45.223697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.698 qpair failed and we were unable to recover it. 00:25:41.698 [2024-11-26 20:55:45.223933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.698 [2024-11-26 20:55:45.223997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.698 qpair failed and we were unable to recover it. 00:25:41.698 [2024-11-26 20:55:45.224239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.698 [2024-11-26 20:55:45.224316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.698 qpair failed and we were unable to recover it. 00:25:41.698 [2024-11-26 20:55:45.224609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.698 [2024-11-26 20:55:45.224671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.698 qpair failed and we were unable to recover it. 
00:25:41.698 [2024-11-26 20:55:45.224989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.698 [2024-11-26 20:55:45.225052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.698 qpair failed and we were unable to recover it. 00:25:41.698 [2024-11-26 20:55:45.225295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.698 [2024-11-26 20:55:45.225388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.698 qpair failed and we were unable to recover it. 00:25:41.698 [2024-11-26 20:55:45.225681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.698 [2024-11-26 20:55:45.225745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.698 qpair failed and we were unable to recover it. 00:25:41.698 [2024-11-26 20:55:45.226035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.698 [2024-11-26 20:55:45.226097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.698 qpair failed and we were unable to recover it. 00:25:41.698 [2024-11-26 20:55:45.226380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.698 [2024-11-26 20:55:45.226446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.698 qpair failed and we were unable to recover it. 00:25:41.698 [2024-11-26 20:55:45.226704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.698 [2024-11-26 20:55:45.226769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.698 qpair failed and we were unable to recover it. 00:25:41.698 [2024-11-26 20:55:45.227028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.698 [2024-11-26 20:55:45.227092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.698 qpair failed and we were unable to recover it. 00:25:41.698 [2024-11-26 20:55:45.227332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.698 [2024-11-26 20:55:45.227397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.698 qpair failed and we were unable to recover it. 00:25:41.698 [2024-11-26 20:55:45.227644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.698 [2024-11-26 20:55:45.227706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.698 qpair failed and we were unable to recover it. 00:25:41.698 [2024-11-26 20:55:45.227959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.698 [2024-11-26 20:55:45.228022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.698 qpair failed and we were unable to recover it. 
00:25:41.698 [2024-11-26 20:55:45.228256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.698 [2024-11-26 20:55:45.228331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.698 qpair failed and we were unable to recover it. 00:25:41.698 [2024-11-26 20:55:45.228545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.698 [2024-11-26 20:55:45.228609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 00:25:41.699 [2024-11-26 20:55:45.228895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.228958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 00:25:41.699 [2024-11-26 20:55:45.229248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.229330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 00:25:41.699 [2024-11-26 20:55:45.229601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.229667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 00:25:41.699 [2024-11-26 20:55:45.229918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.229981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 00:25:41.699 [2024-11-26 20:55:45.230169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.230232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 00:25:41.699 [2024-11-26 20:55:45.230538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.230603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 00:25:41.699 [2024-11-26 20:55:45.230855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.230917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 00:25:41.699 [2024-11-26 20:55:45.231165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.231229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 
00:25:41.699 [2024-11-26 20:55:45.231539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.231603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 00:25:41.699 [2024-11-26 20:55:45.231895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.231959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 00:25:41.699 [2024-11-26 20:55:45.232216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.232279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 00:25:41.699 [2024-11-26 20:55:45.232546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.232608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 00:25:41.699 [2024-11-26 20:55:45.232855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.232918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 00:25:41.699 [2024-11-26 20:55:45.233209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.233273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 00:25:41.699 [2024-11-26 20:55:45.233575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.233638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 00:25:41.699 [2024-11-26 20:55:45.233935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.233998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 00:25:41.699 [2024-11-26 20:55:45.234298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.234394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 00:25:41.699 [2024-11-26 20:55:45.234647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.234710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 
00:25:41.699 [2024-11-26 20:55:45.234993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.235055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 00:25:41.699 [2024-11-26 20:55:45.235274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.235367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 00:25:41.699 [2024-11-26 20:55:45.235619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.235681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 00:25:41.699 [2024-11-26 20:55:45.235881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.235943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 00:25:41.699 [2024-11-26 20:55:45.236182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.236246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 00:25:41.699 [2024-11-26 20:55:45.236529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.236594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 00:25:41.699 [2024-11-26 20:55:45.236874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.236937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 00:25:41.699 [2024-11-26 20:55:45.237155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.237217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 00:25:41.699 [2024-11-26 20:55:45.237512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.237576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 00:25:41.699 [2024-11-26 20:55:45.237865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.237928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 
00:25:41.699 [2024-11-26 20:55:45.238223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.238286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 00:25:41.699 [2024-11-26 20:55:45.238576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.238640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 00:25:41.699 [2024-11-26 20:55:45.238891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.238955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 00:25:41.699 [2024-11-26 20:55:45.239230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.239293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 00:25:41.699 [2024-11-26 20:55:45.239610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.239674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 00:25:41.699 [2024-11-26 20:55:45.239984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.240047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 00:25:41.699 [2024-11-26 20:55:45.240290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.240376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 00:25:41.699 [2024-11-26 20:55:45.240664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.240728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 00:25:41.699 [2024-11-26 20:55:45.241019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.241081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 00:25:41.699 [2024-11-26 20:55:45.241320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.699 [2024-11-26 20:55:45.241386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.699 qpair failed and we were unable to recover it. 
00:25:41.700 [2024-11-26 20:55:45.241632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.241695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 00:25:41.700 [2024-11-26 20:55:45.241980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.242042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 00:25:41.700 [2024-11-26 20:55:45.242290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.242383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 00:25:41.700 [2024-11-26 20:55:45.242636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.242712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 00:25:41.700 [2024-11-26 20:55:45.242997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.243059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 00:25:41.700 [2024-11-26 20:55:45.243328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.243393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 00:25:41.700 [2024-11-26 20:55:45.243633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.243697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 00:25:41.700 [2024-11-26 20:55:45.243926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.243992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 00:25:41.700 [2024-11-26 20:55:45.244248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.244328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 00:25:41.700 [2024-11-26 20:55:45.244624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.244688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 
00:25:41.700 [2024-11-26 20:55:45.244934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.245000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 00:25:41.700 [2024-11-26 20:55:45.245281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.245363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 00:25:41.700 [2024-11-26 20:55:45.245647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.245710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 00:25:41.700 [2024-11-26 20:55:45.246008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.246072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 00:25:41.700 [2024-11-26 20:55:45.246338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.246409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 00:25:41.700 [2024-11-26 20:55:45.246642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.246705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 00:25:41.700 [2024-11-26 20:55:45.246987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.247050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 00:25:41.700 [2024-11-26 20:55:45.247373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.247437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 00:25:41.700 [2024-11-26 20:55:45.247728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.247791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 00:25:41.700 [2024-11-26 20:55:45.248085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.248149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 
00:25:41.700 [2024-11-26 20:55:45.248397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.248461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 00:25:41.700 [2024-11-26 20:55:45.248741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.248803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 00:25:41.700 [2024-11-26 20:55:45.249048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.249111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 00:25:41.700 [2024-11-26 20:55:45.249356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.249420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 00:25:41.700 [2024-11-26 20:55:45.249699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.249762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 00:25:41.700 [2024-11-26 20:55:45.250020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.250085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 00:25:41.700 [2024-11-26 20:55:45.250334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.250408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 00:25:41.700 [2024-11-26 20:55:45.250705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.250769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 00:25:41.700 [2024-11-26 20:55:45.251018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.251080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 00:25:41.700 [2024-11-26 20:55:45.251361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.251429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 
00:25:41.700 [2024-11-26 20:55:45.251698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.251763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 00:25:41.700 [2024-11-26 20:55:45.252044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.252107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 00:25:41.700 [2024-11-26 20:55:45.252353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.252419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 00:25:41.700 [2024-11-26 20:55:45.252666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.252729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 00:25:41.700 [2024-11-26 20:55:45.253009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.253071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 00:25:41.700 [2024-11-26 20:55:45.253330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.253394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 00:25:41.700 [2024-11-26 20:55:45.253648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.253710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 00:25:41.700 [2024-11-26 20:55:45.253967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.700 [2024-11-26 20:55:45.254029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.700 qpair failed and we were unable to recover it. 00:25:41.701 [2024-11-26 20:55:45.254267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.254354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 00:25:41.701 [2024-11-26 20:55:45.254614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.254677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 
00:25:41.701 [2024-11-26 20:55:45.254935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.254998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 00:25:41.701 [2024-11-26 20:55:45.255295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.255393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 00:25:41.701 [2024-11-26 20:55:45.255675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.255738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 00:25:41.701 [2024-11-26 20:55:45.255958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.256033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 00:25:41.701 [2024-11-26 20:55:45.256335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.256400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 00:25:41.701 [2024-11-26 20:55:45.256645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.256711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 00:25:41.701 [2024-11-26 20:55:45.256996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.257060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 00:25:41.701 [2024-11-26 20:55:45.257337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.257403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 00:25:41.701 [2024-11-26 20:55:45.257673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.257736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 00:25:41.701 [2024-11-26 20:55:45.258000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.258063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 
00:25:41.701 [2024-11-26 20:55:45.258326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.258409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 00:25:41.701 [2024-11-26 20:55:45.258659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.258726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 00:25:41.701 [2024-11-26 20:55:45.258971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.259034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 00:25:41.701 [2024-11-26 20:55:45.259332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.259397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 00:25:41.701 [2024-11-26 20:55:45.259658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.259720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 00:25:41.701 [2024-11-26 20:55:45.259978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.260042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 00:25:41.701 [2024-11-26 20:55:45.260337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.260401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 00:25:41.701 [2024-11-26 20:55:45.260671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.260733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 00:25:41.701 [2024-11-26 20:55:45.261019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.261081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 00:25:41.701 [2024-11-26 20:55:45.261290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.261374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 
00:25:41.701 [2024-11-26 20:55:45.261657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.261720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 00:25:41.701 [2024-11-26 20:55:45.262009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.262071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 00:25:41.701 [2024-11-26 20:55:45.262331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.262396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 00:25:41.701 [2024-11-26 20:55:45.262636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.262702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 00:25:41.701 [2024-11-26 20:55:45.262995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.263058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 00:25:41.701 [2024-11-26 20:55:45.263282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.263374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 00:25:41.701 [2024-11-26 20:55:45.263615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.263677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 00:25:41.701 [2024-11-26 20:55:45.263959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.264022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 00:25:41.701 [2024-11-26 20:55:45.264245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.264327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 00:25:41.701 [2024-11-26 20:55:45.264620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.264684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 
00:25:41.701 [2024-11-26 20:55:45.264985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.265047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 00:25:41.701 [2024-11-26 20:55:45.265292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.265373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 00:25:41.701 [2024-11-26 20:55:45.265652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.265715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 00:25:41.701 [2024-11-26 20:55:45.265982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.266045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 00:25:41.701 [2024-11-26 20:55:45.266348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.701 [2024-11-26 20:55:45.266412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.701 qpair failed and we were unable to recover it. 00:25:41.701 [2024-11-26 20:55:45.266667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.266730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 00:25:41.702 [2024-11-26 20:55:45.267027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.267090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 00:25:41.702 [2024-11-26 20:55:45.267391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.267456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 00:25:41.702 [2024-11-26 20:55:45.267765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.267828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 00:25:41.702 [2024-11-26 20:55:45.268079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.268142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 
00:25:41.702 [2024-11-26 20:55:45.268358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.268423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 00:25:41.702 [2024-11-26 20:55:45.268708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.268772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 00:25:41.702 [2024-11-26 20:55:45.269064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.269127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 00:25:41.702 [2024-11-26 20:55:45.269413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.269488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 00:25:41.702 [2024-11-26 20:55:45.269684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.269750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 00:25:41.702 [2024-11-26 20:55:45.270030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.270094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 00:25:41.702 [2024-11-26 20:55:45.270376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.270440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 00:25:41.702 [2024-11-26 20:55:45.270701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.270764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 00:25:41.702 [2024-11-26 20:55:45.270953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.271019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 00:25:41.702 [2024-11-26 20:55:45.271271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.271360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 
00:25:41.702 [2024-11-26 20:55:45.271649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.271712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 00:25:41.702 [2024-11-26 20:55:45.271958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.272021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 00:25:41.702 [2024-11-26 20:55:45.272317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.272382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 00:25:41.702 [2024-11-26 20:55:45.272667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.272730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 00:25:41.702 [2024-11-26 20:55:45.273008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.273070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 00:25:41.702 [2024-11-26 20:55:45.273351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.273416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 00:25:41.702 [2024-11-26 20:55:45.273662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.273729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 00:25:41.702 [2024-11-26 20:55:45.274032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.274095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 00:25:41.702 [2024-11-26 20:55:45.274341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.274406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 00:25:41.702 [2024-11-26 20:55:45.274667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.274730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 
00:25:41.702 [2024-11-26 20:55:45.275026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.275088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 00:25:41.702 [2024-11-26 20:55:45.275342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.275406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 00:25:41.702 [2024-11-26 20:55:45.275655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.275719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 00:25:41.702 [2024-11-26 20:55:45.276002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.276064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 00:25:41.702 [2024-11-26 20:55:45.276288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.276364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 00:25:41.702 [2024-11-26 20:55:45.276606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.276669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 00:25:41.702 [2024-11-26 20:55:45.276917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.276981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 00:25:41.702 [2024-11-26 20:55:45.277204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.277271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 00:25:41.702 [2024-11-26 20:55:45.277527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.277592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 00:25:41.702 [2024-11-26 20:55:45.277842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.277904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 
00:25:41.702 [2024-11-26 20:55:45.278213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.278277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 00:25:41.702 [2024-11-26 20:55:45.278588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.278650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 00:25:41.702 [2024-11-26 20:55:45.278891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.702 [2024-11-26 20:55:45.278954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.702 qpair failed and we were unable to recover it. 00:25:41.702 [2024-11-26 20:55:45.279233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.703 [2024-11-26 20:55:45.279296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.703 qpair failed and we were unable to recover it. 00:25:41.703 [2024-11-26 20:55:45.279620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.703 [2024-11-26 20:55:45.279683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.703 qpair failed and we were unable to recover it. 00:25:41.703 [2024-11-26 20:55:45.279979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.703 [2024-11-26 20:55:45.280041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.703 qpair failed and we were unable to recover it. 00:25:41.703 [2024-11-26 20:55:45.280286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.703 [2024-11-26 20:55:45.280372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.703 qpair failed and we were unable to recover it. 00:25:41.703 [2024-11-26 20:55:45.280628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.703 [2024-11-26 20:55:45.280690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.703 qpair failed and we were unable to recover it. 00:25:41.703 [2024-11-26 20:55:45.280942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.703 [2024-11-26 20:55:45.281005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.703 qpair failed and we were unable to recover it. 00:25:41.703 [2024-11-26 20:55:45.281291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.703 [2024-11-26 20:55:45.281371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.703 qpair failed and we were unable to recover it. 
00:25:41.703 [2024-11-26 20:55:45.281642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.703 [2024-11-26 20:55:45.281704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.703 qpair failed and we were unable to recover it. 00:25:41.703 [2024-11-26 20:55:45.281941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.703 [2024-11-26 20:55:45.282004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.703 qpair failed and we were unable to recover it. 00:25:41.703 [2024-11-26 20:55:45.282267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.703 [2024-11-26 20:55:45.282357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.703 qpair failed and we were unable to recover it. 00:25:41.703 [2024-11-26 20:55:45.282604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.703 [2024-11-26 20:55:45.282678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.703 qpair failed and we were unable to recover it. 00:25:41.703 [2024-11-26 20:55:45.282974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.703 [2024-11-26 20:55:45.283037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.703 qpair failed and we were unable to recover it. 00:25:41.703 [2024-11-26 20:55:45.283333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.703 [2024-11-26 20:55:45.283398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.703 qpair failed and we were unable to recover it. 00:25:41.703 [2024-11-26 20:55:45.283643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.703 [2024-11-26 20:55:45.283706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.703 qpair failed and we were unable to recover it. 00:25:41.703 [2024-11-26 20:55:45.283987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.703 [2024-11-26 20:55:45.284050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.703 qpair failed and we were unable to recover it. 00:25:41.703 [2024-11-26 20:55:45.284298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.703 [2024-11-26 20:55:45.284381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.703 qpair failed and we were unable to recover it. 00:25:41.703 [2024-11-26 20:55:45.284670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.703 [2024-11-26 20:55:45.284733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.703 qpair failed and we were unable to recover it. 
00:25:41.703 [2024-11-26 20:55:45.284971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.703 [2024-11-26 20:55:45.285033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.703 qpair failed and we were unable to recover it. 00:25:41.703 [2024-11-26 20:55:45.285282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.703 [2024-11-26 20:55:45.285364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.703 qpair failed and we were unable to recover it. 00:25:41.703 [2024-11-26 20:55:45.285607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.703 [2024-11-26 20:55:45.285671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.703 qpair failed and we were unable to recover it. 00:25:41.703 [2024-11-26 20:55:45.285923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.703 [2024-11-26 20:55:45.285985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.703 qpair failed and we were unable to recover it. 00:25:41.703 [2024-11-26 20:55:45.286219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.703 [2024-11-26 20:55:45.286282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.703 qpair failed and we were unable to recover it. 00:25:41.703 [2024-11-26 20:55:45.286532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.703 [2024-11-26 20:55:45.286596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.703 qpair failed and we were unable to recover it. 00:25:41.703 [2024-11-26 20:55:45.286882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.703 [2024-11-26 20:55:45.286945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.703 qpair failed and we were unable to recover it. 00:25:41.703 [2024-11-26 20:55:45.287187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.703 [2024-11-26 20:55:45.287251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.703 qpair failed and we were unable to recover it. 00:25:41.703 [2024-11-26 20:55:45.287583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.703 [2024-11-26 20:55:45.287647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.703 qpair failed and we were unable to recover it. 00:25:41.703 [2024-11-26 20:55:45.287930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.703 [2024-11-26 20:55:45.287993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.703 qpair failed and we were unable to recover it. 
00:25:41.703 [2024-11-26 20:55:45.288236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.703 [2024-11-26 20:55:45.288298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.703 qpair failed and we were unable to recover it. 00:25:41.703 [2024-11-26 20:55:45.288537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.703 [2024-11-26 20:55:45.288600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.703 qpair failed and we were unable to recover it. 00:25:41.703 [2024-11-26 20:55:45.288861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.703 [2024-11-26 20:55:45.288924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.703 qpair failed and we were unable to recover it. 00:25:41.703 [2024-11-26 20:55:45.289136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.703 [2024-11-26 20:55:45.289201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.703 qpair failed and we were unable to recover it. 00:25:41.703 [2024-11-26 20:55:45.289500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.703 [2024-11-26 20:55:45.289564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.703 qpair failed and we were unable to recover it. 00:25:41.703 [2024-11-26 20:55:45.289821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.703 [2024-11-26 20:55:45.289884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.703 qpair failed and we were unable to recover it. 00:25:41.703 [2024-11-26 20:55:45.290175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.290237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 00:25:41.704 [2024-11-26 20:55:45.290541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.290606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 00:25:41.704 [2024-11-26 20:55:45.290860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.290925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 00:25:41.704 [2024-11-26 20:55:45.291119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.291182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 
00:25:41.704 [2024-11-26 20:55:45.291465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.291530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 00:25:41.704 [2024-11-26 20:55:45.291812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.291875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 00:25:41.704 [2024-11-26 20:55:45.292167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.292229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 00:25:41.704 [2024-11-26 20:55:45.292524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.292588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 00:25:41.704 [2024-11-26 20:55:45.292839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.292901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 00:25:41.704 [2024-11-26 20:55:45.293151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.293213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 00:25:41.704 [2024-11-26 20:55:45.293485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.293550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 00:25:41.704 [2024-11-26 20:55:45.293799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.293860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 00:25:41.704 [2024-11-26 20:55:45.294152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.294215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 00:25:41.704 [2024-11-26 20:55:45.294523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.294587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 
00:25:41.704 [2024-11-26 20:55:45.294879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.294942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 00:25:41.704 [2024-11-26 20:55:45.295156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.295220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 00:25:41.704 [2024-11-26 20:55:45.295519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.295583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 00:25:41.704 [2024-11-26 20:55:45.295776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.295851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 00:25:41.704 [2024-11-26 20:55:45.296111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.296173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 00:25:41.704 [2024-11-26 20:55:45.296458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.296523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 00:25:41.704 [2024-11-26 20:55:45.296766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.296831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 00:25:41.704 [2024-11-26 20:55:45.297018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.297084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 00:25:41.704 [2024-11-26 20:55:45.297317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.297381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 00:25:41.704 [2024-11-26 20:55:45.297592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.297654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 
00:25:41.704 [2024-11-26 20:55:45.297896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.297959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 00:25:41.704 [2024-11-26 20:55:45.298192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.298254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 00:25:41.704 [2024-11-26 20:55:45.298570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.298634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 00:25:41.704 [2024-11-26 20:55:45.298908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.298970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 00:25:41.704 [2024-11-26 20:55:45.299163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.299227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 00:25:41.704 [2024-11-26 20:55:45.299523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.299587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 00:25:41.704 [2024-11-26 20:55:45.299883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.299946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 00:25:41.704 [2024-11-26 20:55:45.300175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.300238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 00:25:41.704 [2024-11-26 20:55:45.300535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.300599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 00:25:41.704 [2024-11-26 20:55:45.300896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.300958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 
00:25:41.704 [2024-11-26 20:55:45.301242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.301323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 00:25:41.704 [2024-11-26 20:55:45.301571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.301636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 00:25:41.704 [2024-11-26 20:55:45.301899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.301963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 00:25:41.704 [2024-11-26 20:55:45.302244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.704 [2024-11-26 20:55:45.302339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.704 qpair failed and we were unable to recover it. 00:25:41.705 [2024-11-26 20:55:45.302632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.302695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 00:25:41.705 [2024-11-26 20:55:45.302945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.303007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 00:25:41.705 [2024-11-26 20:55:45.303244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.303325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 00:25:41.705 [2024-11-26 20:55:45.303593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.303656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 00:25:41.705 [2024-11-26 20:55:45.303906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.303968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 00:25:41.705 [2024-11-26 20:55:45.304203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.304265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 
00:25:41.705 [2024-11-26 20:55:45.304549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.304612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 00:25:41.705 [2024-11-26 20:55:45.304846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.304909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 00:25:41.705 [2024-11-26 20:55:45.305164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.305227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 00:25:41.705 [2024-11-26 20:55:45.305464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.305529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 00:25:41.705 [2024-11-26 20:55:45.305739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.305803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 00:25:41.705 [2024-11-26 20:55:45.306011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.306073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 00:25:41.705 [2024-11-26 20:55:45.306357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.306422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 00:25:41.705 [2024-11-26 20:55:45.306714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.306776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 00:25:41.705 [2024-11-26 20:55:45.306987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.307049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 00:25:41.705 [2024-11-26 20:55:45.307297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.307372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 
00:25:41.705 [2024-11-26 20:55:45.307626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.307688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 00:25:41.705 [2024-11-26 20:55:45.307897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.307959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 00:25:41.705 [2024-11-26 20:55:45.308223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.308285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 00:25:41.705 [2024-11-26 20:55:45.308599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.308676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 00:25:41.705 [2024-11-26 20:55:45.308971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.309034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 00:25:41.705 [2024-11-26 20:55:45.309338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.309404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 00:25:41.705 [2024-11-26 20:55:45.309691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.309753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 00:25:41.705 [2024-11-26 20:55:45.309970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.310036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 00:25:41.705 [2024-11-26 20:55:45.310267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.310358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 00:25:41.705 [2024-11-26 20:55:45.310642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.310705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 
00:25:41.705 [2024-11-26 20:55:45.311007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.311069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 00:25:41.705 [2024-11-26 20:55:45.311330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.311394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 00:25:41.705 [2024-11-26 20:55:45.311648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.311711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 00:25:41.705 [2024-11-26 20:55:45.312005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.312068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 00:25:41.705 [2024-11-26 20:55:45.312366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.312430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 00:25:41.705 [2024-11-26 20:55:45.312712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.312775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 00:25:41.705 [2024-11-26 20:55:45.313014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.313076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 00:25:41.705 [2024-11-26 20:55:45.313375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.313439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 00:25:41.705 [2024-11-26 20:55:45.313730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.313799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 00:25:41.705 [2024-11-26 20:55:45.314076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.314140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 
00:25:41.705 [2024-11-26 20:55:45.314378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.314442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 00:25:41.705 [2024-11-26 20:55:45.314655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.705 [2024-11-26 20:55:45.314719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.705 qpair failed and we were unable to recover it. 00:25:41.706 [2024-11-26 20:55:45.314999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.315062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 00:25:41.706 [2024-11-26 20:55:45.315266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.315342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 00:25:41.706 [2024-11-26 20:55:45.315593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.315659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 00:25:41.706 [2024-11-26 20:55:45.315909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.315972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 00:25:41.706 [2024-11-26 20:55:45.316250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.316329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 00:25:41.706 [2024-11-26 20:55:45.316581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.316643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 00:25:41.706 [2024-11-26 20:55:45.316858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.316922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 00:25:41.706 [2024-11-26 20:55:45.317213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.317276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 
00:25:41.706 [2024-11-26 20:55:45.317586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.317649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 00:25:41.706 [2024-11-26 20:55:45.317931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.317994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 00:25:41.706 [2024-11-26 20:55:45.318276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.318365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 00:25:41.706 [2024-11-26 20:55:45.318615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.318681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 00:25:41.706 [2024-11-26 20:55:45.318967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.319029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 00:25:41.706 [2024-11-26 20:55:45.319285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.319367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 00:25:41.706 [2024-11-26 20:55:45.319599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.319662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 00:25:41.706 [2024-11-26 20:55:45.319875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.319939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 00:25:41.706 [2024-11-26 20:55:45.320191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.320254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 00:25:41.706 [2024-11-26 20:55:45.320503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.320566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 
00:25:41.706 [2024-11-26 20:55:45.320806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.320871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 00:25:41.706 [2024-11-26 20:55:45.321156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.321220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 00:25:41.706 [2024-11-26 20:55:45.321584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.321648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 00:25:41.706 [2024-11-26 20:55:45.321882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.321955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 00:25:41.706 [2024-11-26 20:55:45.322242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.322323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 00:25:41.706 [2024-11-26 20:55:45.322540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.322604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 00:25:41.706 [2024-11-26 20:55:45.322867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.322928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 00:25:41.706 [2024-11-26 20:55:45.323209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.323273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 00:25:41.706 [2024-11-26 20:55:45.323514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.323577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 00:25:41.706 [2024-11-26 20:55:45.323836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.323898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 
00:25:41.706 [2024-11-26 20:55:45.324144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.324207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 00:25:41.706 [2024-11-26 20:55:45.324505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.324569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 00:25:41.706 [2024-11-26 20:55:45.324814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.324876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 00:25:41.706 [2024-11-26 20:55:45.325099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.325162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 00:25:41.706 [2024-11-26 20:55:45.325441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.325505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 00:25:41.706 [2024-11-26 20:55:45.325758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.325821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 00:25:41.706 [2024-11-26 20:55:45.326031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.326094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 00:25:41.706 [2024-11-26 20:55:45.326360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.326427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 00:25:41.706 [2024-11-26 20:55:45.326673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.326736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.706 qpair failed and we were unable to recover it. 00:25:41.706 [2024-11-26 20:55:45.327019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.706 [2024-11-26 20:55:45.327082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.707 qpair failed and we were unable to recover it. 
00:25:41.707 [2024-11-26 20:55:45.327379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.707 [2024-11-26 20:55:45.327442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.707 qpair failed and we were unable to recover it. 00:25:41.707 [2024-11-26 20:55:45.327734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.707 [2024-11-26 20:55:45.327797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.707 qpair failed and we were unable to recover it. 00:25:41.707 [2024-11-26 20:55:45.328078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.707 [2024-11-26 20:55:45.328141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.707 qpair failed and we were unable to recover it. 00:25:41.707 [2024-11-26 20:55:45.328423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.707 [2024-11-26 20:55:45.328486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.707 qpair failed and we were unable to recover it. 00:25:41.707 [2024-11-26 20:55:45.328770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.707 [2024-11-26 20:55:45.328832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.707 qpair failed and we were unable to recover it. 00:25:41.707 [2024-11-26 20:55:45.329077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.707 [2024-11-26 20:55:45.329144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.707 qpair failed and we were unable to recover it. 00:25:41.707 [2024-11-26 20:55:45.329387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.707 [2024-11-26 20:55:45.329451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.707 qpair failed and we were unable to recover it. 00:25:41.707 [2024-11-26 20:55:45.329732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.707 [2024-11-26 20:55:45.329794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.707 qpair failed and we were unable to recover it. 00:25:41.707 [2024-11-26 20:55:45.330042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.707 [2024-11-26 20:55:45.330105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.707 qpair failed and we were unable to recover it. 00:25:41.707 [2024-11-26 20:55:45.330365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.707 [2024-11-26 20:55:45.330430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.707 qpair failed and we were unable to recover it. 
00:25:41.707 [2024-11-26 20:55:45.330652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.707 [2024-11-26 20:55:45.330716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.707 qpair failed and we were unable to recover it. 00:25:41.707 [2024-11-26 20:55:45.331003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.707 [2024-11-26 20:55:45.331066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.707 qpair failed and we were unable to recover it. 00:25:41.707 [2024-11-26 20:55:45.331330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.707 [2024-11-26 20:55:45.331394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.707 qpair failed and we were unable to recover it. 00:25:41.707 [2024-11-26 20:55:45.331682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.707 [2024-11-26 20:55:45.331745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.707 qpair failed and we were unable to recover it. 00:25:41.707 [2024-11-26 20:55:45.331978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.707 [2024-11-26 20:55:45.332040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.707 qpair failed and we were unable to recover it. 00:25:41.707 [2024-11-26 20:55:45.332278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.707 [2024-11-26 20:55:45.332356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.707 qpair failed and we were unable to recover it. 00:25:41.707 [2024-11-26 20:55:45.332645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.707 [2024-11-26 20:55:45.332708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.707 qpair failed and we were unable to recover it. 00:25:41.707 [2024-11-26 20:55:45.332951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.707 [2024-11-26 20:55:45.333014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.707 qpair failed and we were unable to recover it. 00:25:41.707 [2024-11-26 20:55:45.333264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.707 [2024-11-26 20:55:45.333347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.707 qpair failed and we were unable to recover it. 00:25:41.707 [2024-11-26 20:55:45.333564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.707 [2024-11-26 20:55:45.333627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.707 qpair failed and we were unable to recover it. 
00:25:41.707 [2024-11-26 20:55:45.333876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.707 [2024-11-26 20:55:45.333938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.707 qpair failed and we were unable to recover it. 00:25:41.707 [2024-11-26 20:55:45.334220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.707 [2024-11-26 20:55:45.334283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.707 qpair failed and we were unable to recover it. 00:25:41.707 [2024-11-26 20:55:45.334617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.707 [2024-11-26 20:55:45.334680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.707 qpair failed and we were unable to recover it. 00:25:41.707 [2024-11-26 20:55:45.334896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.707 [2024-11-26 20:55:45.334972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.707 qpair failed and we were unable to recover it. 00:25:41.707 [2024-11-26 20:55:45.335251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.707 [2024-11-26 20:55:45.335335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.707 qpair failed and we were unable to recover it. 00:25:41.707 [2024-11-26 20:55:45.335564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.707 [2024-11-26 20:55:45.335627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.707 qpair failed and we were unable to recover it. 00:25:41.707 [2024-11-26 20:55:45.335859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.707 [2024-11-26 20:55:45.335923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.707 qpair failed and we were unable to recover it. 00:25:41.707 [2024-11-26 20:55:45.336216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.707 [2024-11-26 20:55:45.336278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.707 qpair failed and we were unable to recover it. 00:25:41.707 [2024-11-26 20:55:45.336558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.707 [2024-11-26 20:55:45.336621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.707 qpair failed and we were unable to recover it. 00:25:41.707 [2024-11-26 20:55:45.336841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.707 [2024-11-26 20:55:45.336903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.707 qpair failed and we were unable to recover it. 
00:25:41.707 [2024-11-26 20:55:45.337200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.707 [2024-11-26 20:55:45.337263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.707 qpair failed and we were unable to recover it. 00:25:41.707 [2024-11-26 20:55:45.337538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.707 [2024-11-26 20:55:45.337601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.707 qpair failed and we were unable to recover it. 00:25:41.707 [2024-11-26 20:55:45.337815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.707 [2024-11-26 20:55:45.337877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.707 qpair failed and we were unable to recover it. 00:25:41.707 [2024-11-26 20:55:45.338104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.338167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 00:25:41.708 [2024-11-26 20:55:45.338465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.338530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 00:25:41.708 [2024-11-26 20:55:45.338801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.338863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 00:25:41.708 [2024-11-26 20:55:45.339106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.339170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 00:25:41.708 [2024-11-26 20:55:45.339477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.339543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 00:25:41.708 [2024-11-26 20:55:45.339808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.339870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 00:25:41.708 [2024-11-26 20:55:45.340168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.340231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 
00:25:41.708 [2024-11-26 20:55:45.340525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.340588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 00:25:41.708 [2024-11-26 20:55:45.340867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.340929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 00:25:41.708 [2024-11-26 20:55:45.341162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.341224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 00:25:41.708 [2024-11-26 20:55:45.341499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.341564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 00:25:41.708 [2024-11-26 20:55:45.341863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.341926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 00:25:41.708 [2024-11-26 20:55:45.342168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.342230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 00:25:41.708 [2024-11-26 20:55:45.342454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.342519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 00:25:41.708 [2024-11-26 20:55:45.342776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.342839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 00:25:41.708 [2024-11-26 20:55:45.343113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.343176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 00:25:41.708 [2024-11-26 20:55:45.343469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.343533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 
00:25:41.708 [2024-11-26 20:55:45.343823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.343887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 00:25:41.708 [2024-11-26 20:55:45.344127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.344193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 00:25:41.708 [2024-11-26 20:55:45.344422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.344489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 00:25:41.708 [2024-11-26 20:55:45.344755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.344818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 00:25:41.708 [2024-11-26 20:55:45.345058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.345120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 00:25:41.708 [2024-11-26 20:55:45.345369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.345435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 00:25:41.708 [2024-11-26 20:55:45.345693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.345756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 00:25:41.708 [2024-11-26 20:55:45.345998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.346064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 00:25:41.708 [2024-11-26 20:55:45.346298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.346375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 00:25:41.708 [2024-11-26 20:55:45.346584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.346649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 
00:25:41.708 [2024-11-26 20:55:45.346895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.346961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 00:25:41.708 [2024-11-26 20:55:45.347252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.347332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 00:25:41.708 [2024-11-26 20:55:45.347588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.347652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 00:25:41.708 [2024-11-26 20:55:45.347852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.347929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 00:25:41.708 [2024-11-26 20:55:45.348231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.348295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 00:25:41.708 [2024-11-26 20:55:45.348541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.348605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 00:25:41.708 [2024-11-26 20:55:45.348882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.348945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 00:25:41.708 [2024-11-26 20:55:45.349197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.349260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 00:25:41.708 [2024-11-26 20:55:45.349565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.349628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 00:25:41.708 [2024-11-26 20:55:45.349910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.349972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 
00:25:41.708 [2024-11-26 20:55:45.350263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.708 [2024-11-26 20:55:45.350360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.708 qpair failed and we were unable to recover it. 00:25:41.709 [2024-11-26 20:55:45.350618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.709 [2024-11-26 20:55:45.350684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.709 qpair failed and we were unable to recover it. 00:25:41.709 [2024-11-26 20:55:45.350987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.709 [2024-11-26 20:55:45.351050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.709 qpair failed and we were unable to recover it. 00:25:41.709 [2024-11-26 20:55:45.351244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.709 [2024-11-26 20:55:45.351326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.709 qpair failed and we were unable to recover it. 00:25:41.709 [2024-11-26 20:55:45.351578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.709 [2024-11-26 20:55:45.351644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.709 qpair failed and we were unable to recover it. 00:25:41.709 [2024-11-26 20:55:45.351923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.709 [2024-11-26 20:55:45.351985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.709 qpair failed and we were unable to recover it. 00:25:41.709 [2024-11-26 20:55:45.352223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.709 [2024-11-26 20:55:45.352288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.709 qpair failed and we were unable to recover it. 00:25:41.709 [2024-11-26 20:55:45.352603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.709 [2024-11-26 20:55:45.352666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.709 qpair failed and we were unable to recover it. 00:25:41.709 [2024-11-26 20:55:45.352958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.709 [2024-11-26 20:55:45.353021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.709 qpair failed and we were unable to recover it. 00:25:41.709 [2024-11-26 20:55:45.353320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.709 [2024-11-26 20:55:45.353386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.709 qpair failed and we were unable to recover it. 
00:25:41.709 [2024-11-26 20:55:45.353640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.709 [2024-11-26 20:55:45.353702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.709 qpair failed and we were unable to recover it. 00:25:41.709 [2024-11-26 20:55:45.353980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.709 [2024-11-26 20:55:45.354042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.709 qpair failed and we were unable to recover it. 00:25:41.709 [2024-11-26 20:55:45.354364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.709 [2024-11-26 20:55:45.354430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.709 qpair failed and we were unable to recover it. 00:25:41.709 [2024-11-26 20:55:45.354717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.709 [2024-11-26 20:55:45.354780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.709 qpair failed and we were unable to recover it. 00:25:41.709 [2024-11-26 20:55:45.354975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.709 [2024-11-26 20:55:45.355039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.709 qpair failed and we were unable to recover it. 00:25:41.709 [2024-11-26 20:55:45.355299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.709 [2024-11-26 20:55:45.355380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.709 qpair failed and we were unable to recover it. 00:25:41.709 [2024-11-26 20:55:45.355599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.709 [2024-11-26 20:55:45.355663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.709 qpair failed and we were unable to recover it. 00:25:41.709 [2024-11-26 20:55:45.355910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.709 [2024-11-26 20:55:45.355972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.709 qpair failed and we were unable to recover it. 00:25:41.709 [2024-11-26 20:55:45.356211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.709 [2024-11-26 20:55:45.356274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.709 qpair failed and we were unable to recover it. 00:25:41.709 [2024-11-26 20:55:45.356541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.709 [2024-11-26 20:55:45.356605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.709 qpair failed and we were unable to recover it. 
00:25:41.709 [2024-11-26 20:55:45.356849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.709 [2024-11-26 20:55:45.356914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.709 qpair failed and we were unable to recover it. 00:25:41.709 [2024-11-26 20:55:45.357208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.709 [2024-11-26 20:55:45.357270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.709 qpair failed and we were unable to recover it. 00:25:41.709 [2024-11-26 20:55:45.357580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.709 [2024-11-26 20:55:45.357643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.709 qpair failed and we were unable to recover it. 00:25:41.709 [2024-11-26 20:55:45.357925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.709 [2024-11-26 20:55:45.357988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.709 qpair failed and we were unable to recover it. 00:25:41.709 [2024-11-26 20:55:45.358215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.709 [2024-11-26 20:55:45.358277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.709 qpair failed and we were unable to recover it. 00:25:41.709 [2024-11-26 20:55:45.358578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.709 [2024-11-26 20:55:45.358645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.709 qpair failed and we were unable to recover it. 00:25:41.709 [2024-11-26 20:55:45.358839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.709 [2024-11-26 20:55:45.358902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.709 qpair failed and we were unable to recover it. 00:25:41.709 [2024-11-26 20:55:45.359131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.709 [2024-11-26 20:55:45.359196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.709 qpair failed and we were unable to recover it. 00:25:41.709 [2024-11-26 20:55:45.359501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.709 [2024-11-26 20:55:45.359565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.709 qpair failed and we were unable to recover it. 00:25:41.709 [2024-11-26 20:55:45.359852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.709 [2024-11-26 20:55:45.359915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.709 qpair failed and we were unable to recover it. 
00:25:41.709 [2024-11-26 20:55:45.360155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.709 [2024-11-26 20:55:45.360220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.709 qpair failed and we were unable to recover it. 00:25:41.709 [2024-11-26 20:55:45.360489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.709 [2024-11-26 20:55:45.360554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.709 qpair failed and we were unable to recover it. 00:25:41.709 [2024-11-26 20:55:45.360795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.360858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.361142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.361218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.361490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.361555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.361797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.361861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.362093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.362155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.362439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.362504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.362710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.362776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.363041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.363103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 
00:25:41.983 [2024-11-26 20:55:45.363389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.363453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.363712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.363776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.364015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.364078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.364317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.364382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.364673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.364737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.364988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.365050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.365288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.365363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.365558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.365622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.365904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.365968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.366216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.366281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 
00:25:41.983 [2024-11-26 20:55:45.366605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.366668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.366979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.367042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.367279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.367362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.367606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.367669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.367923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.367986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.368267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.368348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.368603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.368665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.368921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.368985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.369228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.369292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.369581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.369643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 
00:25:41.983 [2024-11-26 20:55:45.369952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.370016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.370334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.370399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.370657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.370720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.370933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.370998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.371233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.371296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.371569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.371634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.371897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.371959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.372212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.372276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.372585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.372648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.372926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.372989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 
00:25:41.983 [2024-11-26 20:55:45.373239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.373322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.373574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.373637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.373881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.373943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.374174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.374248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.374549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.374612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.374891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.374954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.375182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.375244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.375506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.375572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.375868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.375932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.376180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.376243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 
00:25:41.983 [2024-11-26 20:55:45.376521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.376584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.376795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.376859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.377108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.377171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.377461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.377527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.377819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.377883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.378142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.378208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.378462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.378527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.378835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.378898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.983 [2024-11-26 20:55:45.379157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.983 [2024-11-26 20:55:45.379220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.983 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.379565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.379629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 
00:25:41.984 [2024-11-26 20:55:45.379873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.379939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.380250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.380327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.380615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.380678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.380937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.380998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.381265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.381346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.381635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.381698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.381928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.381990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.382237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.382346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.382644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.382707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.382905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.382972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 
00:25:41.984 [2024-11-26 20:55:45.383276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.383360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.383660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.383723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.383979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.384041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.384288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.384365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.384663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.384727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.384973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.385035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.385270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.385352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.385618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.385681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.385916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.385979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.386187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.386250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 
00:25:41.984 [2024-11-26 20:55:45.386514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.386579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.386881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.386945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.387224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.387286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.387538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.387613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.387863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.387926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.388222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.388285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.388617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.388679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.388974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.389037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.389284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.389367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.389612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.389674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 
00:25:41.984 [2024-11-26 20:55:45.389924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.389988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.390233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.390298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.390543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.390608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.390900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.390964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.391268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.391350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.391612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.391674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.391884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.391950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.392221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.392285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.392592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.392655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.392941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.393003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 
00:25:41.984 [2024-11-26 20:55:45.393255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.393335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.393611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.393673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.393890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.393956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.394243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.394336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.394637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.394701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.394946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.395009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.395254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.395336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.395594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.395657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.395902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.395968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.396218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.396281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 
00:25:41.984 [2024-11-26 20:55:45.396615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.396680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.396960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.397022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.397337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.397402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.397649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.397712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.397960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.398024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.398267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.398365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.398660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.398723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.398963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.399026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.399204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.399266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.399526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.399591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 
00:25:41.984 [2024-11-26 20:55:45.399882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.399945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.400235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.400296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.400563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.400626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.400911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.400985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.401263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.401343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.401556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.401620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.401819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.401884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.402140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.402202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.402522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.402587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.984 [2024-11-26 20:55:45.402872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.402937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 
00:25:41.984 [2024-11-26 20:55:45.403173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.984 [2024-11-26 20:55:45.403238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.984 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.403449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.403513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.403711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.403777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.404060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.404124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.404411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.404476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.404763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.404827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.405079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.405145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.405464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.405528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.405767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.405831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.406021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.406084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 
00:25:41.985 [2024-11-26 20:55:45.406327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.406391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.406678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.406741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.406990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.407055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.407351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.407415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.407669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.407732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.407940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.408003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.408178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.408242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.408480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.408544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.408857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.408922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.409163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.409227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 
00:25:41.985 [2024-11-26 20:55:45.409527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.409604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.409814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.409879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.410122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.410187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.410426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.410493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.410776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.410840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.411085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.411147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.411390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.411457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.411704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.411769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.412050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.412112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.412407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.412471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 
00:25:41.985 [2024-11-26 20:55:45.412730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.412793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.413070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.413132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.413427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.413491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.413768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.413831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.414055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.414118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.414413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.414477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.414760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.414824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.415074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.415137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.415419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.415483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.415737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.415801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 
00:25:41.985 [2024-11-26 20:55:45.416089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.416151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.416443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.416507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.416753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.416818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.417064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.417129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.417425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.417491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.417732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.417795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.418089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.418151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.418456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.418521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.418805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.418868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.419118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.419181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 
00:25:41.985 [2024-11-26 20:55:45.419459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.419523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.419808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.419872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.420119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.420180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.420425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.420491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.420753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.420816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.421106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.421169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.421464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.421529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.421808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.421871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.422125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.422189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.422401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.422465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 
00:25:41.985 [2024-11-26 20:55:45.422714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.422788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.423071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.423133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.423423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.423487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.423675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.423738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.985 qpair failed and we were unable to recover it. 00:25:41.985 [2024-11-26 20:55:45.423951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.985 [2024-11-26 20:55:45.424016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.424273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.424349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.424659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.424722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.425007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.425070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.425354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.425419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.425668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.425732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 
00:25:41.986 [2024-11-26 20:55:45.425948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.426010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.426296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.426389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.426636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.426698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.426888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.426951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.427196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.427259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.427536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.427598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.427855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.427918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.428156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.428218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.428440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.428503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.428770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.428832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 
00:25:41.986 [2024-11-26 20:55:45.429069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.429132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.429413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.429478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.429763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.429826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.430061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.430124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.430371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.430434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.430675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.430738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.430984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.431046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.431360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.431424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.431712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.431776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.432068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.432130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 
00:25:41.986 [2024-11-26 20:55:45.432346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.432411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.432700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.432764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.433049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.433112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.433370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.433434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.433660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.433725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.433999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.434063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.434244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.434320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.434538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.434602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.434899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.434963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.435223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.435285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 
00:25:41.986 [2024-11-26 20:55:45.435575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.435649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.435893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.435960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.436240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.436317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.436616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.436679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.436921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.436983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.437282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.437364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.437642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.437704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.437996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.438058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.438299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.438397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.438683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.438746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 
00:25:41.986 [2024-11-26 20:55:45.438985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.439047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.439340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.439405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.439687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.439750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.439992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.440057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.440331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.440396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.440686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.440750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.441034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.441098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.441333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.441399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.441653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.441716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.442006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.442068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 
00:25:41.986 [2024-11-26 20:55:45.442347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.442413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.442666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.442729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.443018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.443081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.443333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.443397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.443639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.443706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.443957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.444021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.444231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.444295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.444739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.444804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.445060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.445123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.445414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.445478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 
00:25:41.986 [2024-11-26 20:55:45.445761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.445825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.446067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.446132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.446420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.446484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.446768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.446832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.447091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.447155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.447452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.447516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.986 [2024-11-26 20:55:45.447810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.986 [2024-11-26 20:55:45.447873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.986 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.448153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.448216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.448511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.448574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.448823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.448888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 
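The errno = 111 reported by every one of these posix_sock_create connect() failures is ECONNREFUSED on Linux: the host keeps dialing 10.0.0.2:4420 and nothing is accepting on that port, which is consistent with the target side having just gone away. A minimal, self-contained sketch (not part of the SPDK test suite; the loopback address and port 4420 are only illustrative) that reproduces the same errno from Python:

import errno
import socket

def try_connect(addr: str, port: int) -> int:
    """Attempt one TCP connect; return 0 on success or the errno on failure."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1.0)
        try:
            sock.connect((addr, port))
            return 0
        except OSError as exc:
            return exc.errno or -1

if __name__ == "__main__":
    rc = try_connect("127.0.0.1", 4420)
    # With no listener on the port this typically prints: 111 ECONNREFUSED
    print(rc, errno.errorcode.get(rc, "OK"))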
00:25:41.987 [2024-11-26 20:55:45.449099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.449173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1772141 Killed "${NVMF_APP[@]}" "$@" 00:25:41.987 [2024-11-26 20:55:45.449421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.449487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.449679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.449743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.450027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:25:41.987 [2024-11-26 20:55:45.450091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.450351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.450417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:41.987 [2024-11-26 20:55:45.450610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.450675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:41.987 [2024-11-26 20:55:45.450872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.450935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:41.987 [2024-11-26 20:55:45.451147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.451211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 
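The shell trace interleaved above shows where the refused connections come from: target_disconnect.sh (line 36 per the "Killed" message) kills the running "${NVMF_APP[@]}" target, and disconnect_init then restarts it through nvmfappstart -m 0xF0. In SPDK applications -m is the CPU core mask, so decoding 0xF0 is plain bit arithmetic, sketched below (illustrative helper only, not SPDK code):

def cores_from_mask(mask: int) -> list[int]:
    """Return the CPU core indices whose bits are set in an SPDK-style core mask."""
    return [bit for bit in range(mask.bit_length()) if mask & (1 << bit)]

# 0xF0 == 0b11110000, so the restarted target is pinned to cores 4-7.
print(cores_from_mask(0xF0))  # [4, 5, 6, 7]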
00:25:41.987 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:41.987 [2024-11-26 20:55:45.451480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.451547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.451743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.451807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.452088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.452151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.452404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.452440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.452577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.452610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.452761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.452794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.452967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.453002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.453146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.453181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.453422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.453457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 
00:25:41.987 [2024-11-26 20:55:45.453607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.453642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.453784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.453818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.453989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.454023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.454146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.454179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.454326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.454378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.454491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.454524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.454666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.454698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.454823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.454861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.455024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.455059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.455312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.455345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 
00:25:41.987 [2024-11-26 20:55:45.455471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.455505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.455662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.455695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.455873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.455938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.456187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.456247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.456435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.456468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.456574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.456624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1772664 00:25:41.987 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:41.987 [2024-11-26 20:55:45.456797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.456873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1772664 00:25:41.987 [2024-11-26 20:55:45.457159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.457219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it.
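
For reference, errno = 111 in the connect() failures above is ECONNREFUSED on Linux: the host side keeps dialing 10.0.0.2:4420 while the test has torn the target down and is restarting nvmf_tgt inside the cvl_0_0_ns_spdk namespace. The snippet below is only a minimal, self-contained sketch of that pattern, a TCP connect loop that treats ECONNREFUSED as transient and retries. It is not SPDK's posix_sock_create() or nvme_tcp code; the address and port are copied from the log, and the retry count and back-off interval are illustrative assumptions.

/* connect_retry.c - illustrative sketch only, not SPDK code.
 * Repeatedly attempts a TCP connect to 10.0.0.2:4420 (values taken
 * from the log above) and treats ECONNREFUSED (errno 111) as a
 * transient failure while the target is being restarted. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int connect_with_retry(const char *ip, uint16_t port, int max_attempts)
{
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(port) };

    if (inet_pton(AF_INET, ip, &addr.sin_addr) != 1) {
        return -1;
    }

    for (int attempt = 1; attempt <= max_attempts; attempt++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            return -1;
        }
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            return fd;                 /* connected; caller owns the fd */
        }
        int err = errno;
        close(fd);
        fprintf(stderr, "connect() failed, errno = %d (%s)\n", err, strerror(err));
        if (err != ECONNREFUSED && err != ETIMEDOUT) {
            return -1;                 /* not a transient condition, give up */
        }
        usleep(100 * 1000);            /* back off briefly, then retry */
    }
    return -1;                         /* target never came back */
}

int main(void)
{
    int fd = connect_with_retry("10.0.0.2", 4420, 50);
    if (fd < 0) {
        fprintf(stderr, "recovery gave up, target still refusing connections\n");
        return 1;
    }
    close(fd);
    return 0;
}
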
00:25:41.987 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1772664 ']' 00:25:41.987 [2024-11-26 20:55:45.457406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.457439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.987 [2024-11-26 20:55:45.457562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.457593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:41.987 [2024-11-26 20:55:45.457847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.457912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:41.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:41.987 [2024-11-26 20:55:45.458190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.458253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:41.987 [2024-11-26 20:55:45.458447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.458480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:41.987 [2024-11-26 20:55:45.458618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.458651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.458848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.458912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 
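
The waitforlisten 1772664 trace above polls until the freshly started nvmf_tgt process accepts RPCs on /var/tmp/spdk.sock (the rpc_addr and max_retries=100 values echoed by autotest_common.sh). The real helper is a shell function; the sketch below is only a rough C equivalent of that wait, with the socket path and retry budget copied from the trace, to show what "listen on UNIX domain socket" amounts to.

/* wait_for_rpc_sock.c - illustrative sketch only, not the autotest helper.
 * Polls until something is accepting connections on the RPC socket path
 * shown in the trace above (/var/tmp/spdk.sock). */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_unix_listener(const char *path, int max_retries)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    snprintf(addr.sun_path, sizeof(addr.sun_path), "%s", path);

    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) {
            return -1;
        }
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;              /* target is up and listening */
        }
        close(fd);
        sleep(1);                  /* ENOENT/ECONNREFUSED: not ready yet */
    }
    return -1;                     /* process never started listening */
}

int main(void)
{
    if (wait_for_unix_listener("/var/tmp/spdk.sock", 100) != 0) {
        fprintf(stderr, "process never started listening on /var/tmp/spdk.sock\n");
        return 1;
    }
    printf("listener is up\n");
    return 0;
}
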
00:25:41.987 [2024-11-26 20:55:45.459093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.459128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.459277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.459327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.459466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.459500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.459686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.459721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.459844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.459885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.460001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.460038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.460184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.460218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.460393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.460426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.460561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.460595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.460725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.460759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 
00:25:41.987 [2024-11-26 20:55:45.460892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.460926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.461071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.461104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.461247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.461279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.461417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.461451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.461594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.461628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.461767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.461799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.461914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.461948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.462079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.462112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.462231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.462265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.987 [2024-11-26 20:55:45.462388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.462422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 
00:25:41.987 [2024-11-26 20:55:45.462604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.987 [2024-11-26 20:55:45.462639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.987 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.462768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.462801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.462933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.462967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.463099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.463132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.463299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.463336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.463465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.463496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.463614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.463645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.463762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.463793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.463954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.463985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.464124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.464165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 
00:25:41.988 [2024-11-26 20:55:45.464276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.464319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.464439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.464472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.464632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.464663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.464793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.464835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.464952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.464985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.465100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.465142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.465248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.465279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.465391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.465423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.465540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.465572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.465707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.465738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 
00:25:41.988 [2024-11-26 20:55:45.465872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.465903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.466075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.466106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.466247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.466281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.466439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.466471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.466574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.466610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.466718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.466748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.466900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.466930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.467094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.467128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.467263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.467292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.467416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.467446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 
00:25:41.988 [2024-11-26 20:55:45.467572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.467604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.467736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.467769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.467909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.467939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.468041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.468070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.468212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.468242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.468359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.468391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.468526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.468555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.468701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.468731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.468876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.468907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.469015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.469044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 
00:25:41.988 [2024-11-26 20:55:45.469186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.469217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.469372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.469403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.469505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.469535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.469637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.469666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.469821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.469849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.469974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.470002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.470098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.470138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.470276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.470309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.470418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.470448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.470601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.470630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 
00:25:41.988 [2024-11-26 20:55:45.470784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.470814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.470913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.470942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.471075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.471105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.471206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.471237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.471357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.471387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.471516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.471545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.471649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.471680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.471836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.471865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.471961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.471994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.472131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.472161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 
00:25:41.988 [2024-11-26 20:55:45.472283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.472324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.472478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.472508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.472638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.472667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.472795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.472824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.472957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.472992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.988 [2024-11-26 20:55:45.473128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.988 [2024-11-26 20:55:45.473157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.988 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.473280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.473319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.473428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.473458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.473584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.473615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.473750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.473779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 
00:25:41.989 [2024-11-26 20:55:45.473930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.473960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.474053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.474082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.474186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.474215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.474339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.474371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.474498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.474527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.474662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.474692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.474800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.474829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.474991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.475021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.475157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.475186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.475318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.475350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 
00:25:41.989 [2024-11-26 20:55:45.475475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.475505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.475624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.475652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.475761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.475790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.475917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.475947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.476042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.476073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.476172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.476202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.476327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.476358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.476487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.476516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.476640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.476670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.476824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.476854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 
00:25:41.989 [2024-11-26 20:55:45.476947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.476976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.477105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.477135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.477244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.477274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.477406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.477437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.477562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.477592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.477722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.477751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.477904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.477934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.478063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.478093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.478192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.478222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.478334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.478364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 
00:25:41.989 [2024-11-26 20:55:45.478492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.478523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.478623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.478654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.478764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.478793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.478893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.478922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.479051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.479086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.479229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.479258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.479417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.479443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.479528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.479555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.479653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.479679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.479817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.479843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 
00:25:41.989 [2024-11-26 20:55:45.479935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.479961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.480080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.480106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.480196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.480222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.480306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.480333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.480415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.480441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.480533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.480559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.480674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.480699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.480788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.480814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.480910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.480937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.481053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.481078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 
00:25:41.989 [2024-11-26 20:55:45.481203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.481228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.481317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.481343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.481429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.481455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.481544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.481569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.481672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.481698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.481776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.481803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.481915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.481941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.482053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.482078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.482198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.482223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.482333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.482359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 
00:25:41.989 [2024-11-26 20:55:45.482440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.482466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.482580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.482606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.482721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.482747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.482840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.482866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.482974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.483000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.483118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.483144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.483259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.483285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.989 [2024-11-26 20:55:45.483386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.989 [2024-11-26 20:55:45.483413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.989 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.483522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.483548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.483634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.483660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 
00:25:41.990 [2024-11-26 20:55:45.483771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.483796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.483902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.483928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.484015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.484041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.484114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.484139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.484240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.484271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.484363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.484389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.484505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.484531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.484639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.484665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.484758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.484784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.484874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.484900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 
00:25:41.990 [2024-11-26 20:55:45.484973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.484998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.485114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.485139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.485248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.485273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.485396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.485422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.485539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.485564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.485656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.485682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.485767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.485793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.485883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.485909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.486033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.486060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.486142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.486168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 
00:25:41.990 [2024-11-26 20:55:45.486251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.486276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.486404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.486431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.486515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.486541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.486637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.486663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.486779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.486805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.486887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.486913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.487024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.487050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.487136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.487161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.487280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.487312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.487431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.487456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 
00:25:41.990 [2024-11-26 20:55:45.487576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.487602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.487688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.487714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.487820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.487846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.487931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.487958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.488046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.488071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.488187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.488213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.488311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.488338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.488430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.488455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.488543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.488568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.488653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.488679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 
00:25:41.990 [2024-11-26 20:55:45.488771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.488797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.488885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.488911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.489037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.489063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.489144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.489171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.489258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.489289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.489396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.489422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.489537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.489563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.489647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.489672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.489787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.489813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.489912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.489938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 
00:25:41.990 [2024-11-26 20:55:45.490042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.490068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.490142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.490167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.490259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.490285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.490382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.490409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.490501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.490527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.490613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.490638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.490762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.490789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.490908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.490933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.491073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.491098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.491212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.491237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 
00:25:41.990 [2024-11-26 20:55:45.491360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.491387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.491473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.491499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.491602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.990 [2024-11-26 20:55:45.491628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.990 qpair failed and we were unable to recover it. 00:25:41.990 [2024-11-26 20:55:45.491743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.491768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.491850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.491877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.491983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.492008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.492092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.492119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.492216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.492242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.492329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.492354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.492444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.492471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 
00:25:41.991 [2024-11-26 20:55:45.492561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.492587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.492681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.492708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.492833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.492859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.492943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.492970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.493112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.493137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.493271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.493297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.493427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.493453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.493530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.493555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.493654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.493679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.493752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.493778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 
00:25:41.991 [2024-11-26 20:55:45.493869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.493895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.494029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.494054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.494191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.494217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.494327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.494354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.494465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.494496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.494612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.494637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.494773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.494799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.494902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.494928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.495039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.495064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.495147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.495173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 
00:25:41.991 [2024-11-26 20:55:45.495284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.495319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.495410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.495436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.495578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.495604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.495719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.495745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.495823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.495849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.495945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.495971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.496095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.496121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.496195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.496220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.496366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.496392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.496483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.496508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 
00:25:41.991 [2024-11-26 20:55:45.496586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.496612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.496687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.496713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.496793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.496820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.496944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.496970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.497060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.497086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.497191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.497217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.497312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.497338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.497430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.497456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.497542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.497567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.497683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.497709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 
00:25:41.991 [2024-11-26 20:55:45.497829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.497855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.497935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.497960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.498044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.498070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.498160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.498187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.498297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.498330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.498450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.498476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.498558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.498584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.498675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.498701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.498792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.498830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.498940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.498967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 
00:25:41.991 [2024-11-26 20:55:45.499060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.499085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.499172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.499198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.499313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.499339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.499450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.499476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.499557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.499588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.499671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.499697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.499824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.499849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.499959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.499986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.500078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.500103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.500224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.500249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 
00:25:41.991 [2024-11-26 20:55:45.500351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.500379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.500492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.500518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.500658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.500683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.500794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.500819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.500936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.500961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.501075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.991 [2024-11-26 20:55:45.501101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.991 qpair failed and we were unable to recover it. 00:25:41.991 [2024-11-26 20:55:45.501180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.501206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.501297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.501338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.501426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.501451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.501593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.501618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 
00:25:41.992 [2024-11-26 20:55:45.501733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.501758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.501873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.501898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.501985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.502011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.502128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.502153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.502265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.502292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.502413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.502438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.502627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.502653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.502745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.502770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.502868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.502893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.503002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.503027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 [2024-11-26 20:55:45.503012] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.503088] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:41.992 [2024-11-26 20:55:45.503127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.503152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.503348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.503374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.503490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.503513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.503601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.503628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.503715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.503742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.503855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.503880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.503995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.504021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.504159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.504185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.504265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.504291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 
00:25:41.992 [2024-11-26 20:55:45.504415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.504442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.504551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.504577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.504672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.504698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.504794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.504820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.504916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.504947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.505030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.505055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.505145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.505174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.505294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.505335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.505447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.505474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.505592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.505618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 
00:25:41.992 [2024-11-26 20:55:45.505754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.505780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.505875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.505901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.506039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.506065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.506177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.506203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.506289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.506325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.506413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.506440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.506533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.506558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.506678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.506704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.506821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.506860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.506953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.506979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 
00:25:41.992 [2024-11-26 20:55:45.507117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.507143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.507263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.507290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.507413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.507439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.507578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.507604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.507693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.507719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.507841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.507878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.507983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.508009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.508118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.508144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.508284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.508317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.508436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.508462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 
00:25:41.992 [2024-11-26 20:55:45.508542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.508569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.508691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.508717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.508840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.508868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.508981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.509007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.509117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.509143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.509222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.509249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.509367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.509393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.509508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.509534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.509660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.509686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.509773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.509799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 
00:25:41.992 [2024-11-26 20:55:45.509886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.509912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.510047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.510073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.510181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.992 [2024-11-26 20:55:45.510208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.992 qpair failed and we were unable to recover it. 00:25:41.992 [2024-11-26 20:55:45.510325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.510352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.510440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.510470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.510585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.510611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.510699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.510726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.510831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.510857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.510970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.510996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.511079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.511104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 
00:25:41.993 [2024-11-26 20:55:45.511195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.511220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.511348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.511374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.511490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.511516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.511601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.511627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.511765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.511790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.511875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.511901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.511992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.512017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.512093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.512119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.512208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.512234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.512350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.512376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 
00:25:41.993 [2024-11-26 20:55:45.512476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.512502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.512615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.512641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.512756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.512783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.512883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.512910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.513021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.513047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.513171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.513198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.513332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.513360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.513473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.513498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.513606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.513632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.513713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.513739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 
00:25:41.993 [2024-11-26 20:55:45.513835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.513860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.513958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.513984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.514093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.514118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.514312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.514338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.514423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.514449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.514587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.514613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.514802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.514828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.514945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.514971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.515087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.515113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.515197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.515224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 
00:25:41.993 [2024-11-26 20:55:45.515337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.515363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.515487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.515513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.515603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.515629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.515734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.515760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.515897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.515927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.516037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.516063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.516149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.516174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.516278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.516309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.516406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.516432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.516525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.516552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 
00:25:41.993 [2024-11-26 20:55:45.516656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.516682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.516794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.516820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.516913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.516940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.517022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.517048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.517132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.517160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.517280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.517321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.517436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.517461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.517650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.517676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.517796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.517823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.517942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.517968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 
00:25:41.993 [2024-11-26 20:55:45.518058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.518083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.518219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.518245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.518332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.518360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.518467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.518493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.518629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.518655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.518744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.518770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.518852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.518878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.518973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.518999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.519075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.519101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.519210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.519236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 
00:25:41.993 [2024-11-26 20:55:45.519346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.519372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.519491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.519518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.519624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.519650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.519753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.519778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.519868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.519894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.993 [2024-11-26 20:55:45.520000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.993 [2024-11-26 20:55:45.520025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.993 qpair failed and we were unable to recover it. 00:25:41.994 [2024-11-26 20:55:45.520128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.994 [2024-11-26 20:55:45.520154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.994 qpair failed and we were unable to recover it. 00:25:41.994 [2024-11-26 20:55:45.520239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.994 [2024-11-26 20:55:45.520264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.994 qpair failed and we were unable to recover it. 00:25:41.994 [2024-11-26 20:55:45.520382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.994 [2024-11-26 20:55:45.520409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.994 qpair failed and we were unable to recover it. 00:25:41.994 [2024-11-26 20:55:45.520491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.994 [2024-11-26 20:55:45.520517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.994 qpair failed and we were unable to recover it. 
00:25:41.994 [2024-11-26 20:55:45.520648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.994 [2024-11-26 20:55:45.520674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.994 qpair failed and we were unable to recover it. 00:25:41.994 [2024-11-26 20:55:45.520748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.994 [2024-11-26 20:55:45.520773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.994 qpair failed and we were unable to recover it. 00:25:41.994 [2024-11-26 20:55:45.520879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.994 [2024-11-26 20:55:45.520904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.994 qpair failed and we were unable to recover it. 00:25:41.994 [2024-11-26 20:55:45.521020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.994 [2024-11-26 20:55:45.521045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.994 qpair failed and we were unable to recover it. 00:25:41.994 [2024-11-26 20:55:45.521179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.994 [2024-11-26 20:55:45.521209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.994 qpair failed and we were unable to recover it. 00:25:41.994 [2024-11-26 20:55:45.521329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.994 [2024-11-26 20:55:45.521355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.994 qpair failed and we were unable to recover it. 00:25:41.994 [2024-11-26 20:55:45.521428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.994 [2024-11-26 20:55:45.521454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.994 qpair failed and we were unable to recover it. 00:25:41.994 [2024-11-26 20:55:45.521589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.994 [2024-11-26 20:55:45.521615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.994 qpair failed and we were unable to recover it. 00:25:41.994 [2024-11-26 20:55:45.521706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.994 [2024-11-26 20:55:45.521731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.994 qpair failed and we were unable to recover it. 00:25:41.994 [2024-11-26 20:55:45.521844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.994 [2024-11-26 20:55:45.521887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.994 qpair failed and we were unable to recover it. 
00:25:41.994 [2024-11-26 20:55:45.522032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.994 [2024-11-26 20:55:45.522061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.994 qpair failed and we were unable to recover it. 00:25:41.994 [2024-11-26 20:55:45.522154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.994 [2024-11-26 20:55:45.522182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.994 qpair failed and we were unable to recover it. 00:25:41.994 [2024-11-26 20:55:45.522297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.994 [2024-11-26 20:55:45.522333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.994 qpair failed and we were unable to recover it. 00:25:41.994 [2024-11-26 20:55:45.522447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.994 [2024-11-26 20:55:45.522475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.994 qpair failed and we were unable to recover it. 00:25:41.994 [2024-11-26 20:55:45.522593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.994 [2024-11-26 20:55:45.522620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.994 qpair failed and we were unable to recover it. 00:25:41.994 [2024-11-26 20:55:45.522733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.994 [2024-11-26 20:55:45.522760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.994 qpair failed and we were unable to recover it. 00:25:41.994 [2024-11-26 20:55:45.522849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.994 [2024-11-26 20:55:45.522876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.994 qpair failed and we were unable to recover it. 00:25:41.994 [2024-11-26 20:55:45.522959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.994 [2024-11-26 20:55:45.522986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.994 qpair failed and we were unable to recover it. 00:25:41.994 [2024-11-26 20:55:45.523103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.994 [2024-11-26 20:55:45.523130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.994 qpair failed and we were unable to recover it. 00:25:41.994 [2024-11-26 20:55:45.523208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.994 [2024-11-26 20:55:45.523235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.994 qpair failed and we were unable to recover it. 
00:25:41.994 [2024-11-26 20:55:45.523351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.994 [2024-11-26 20:55:45.523380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.994 qpair failed and we were unable to recover it. 00:25:41.994 [2024-11-26 20:55:45.523469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.994 [2024-11-26 20:55:45.523500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.994 qpair failed and we were unable to recover it. 00:25:41.994 [2024-11-26 20:55:45.523618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.994 [2024-11-26 20:55:45.523645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.994 qpair failed and we were unable to recover it. 00:25:41.994 [2024-11-26 20:55:45.523733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.994 [2024-11-26 20:55:45.523761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.994 qpair failed and we were unable to recover it. 00:25:41.994 [2024-11-26 20:55:45.523851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.994 [2024-11-26 20:55:45.523880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.994 qpair failed and we were unable to recover it. 00:25:41.994 [2024-11-26 20:55:45.523998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.994 [2024-11-26 20:55:45.524025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.994 qpair failed and we were unable to recover it. 00:25:41.994 [2024-11-26 20:55:45.524138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.994 [2024-11-26 20:55:45.524165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.994 qpair failed and we were unable to recover it. 00:25:41.994 [2024-11-26 20:55:45.524244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.994 [2024-11-26 20:55:45.524270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.994 qpair failed and we were unable to recover it. 00:25:41.994 [2024-11-26 20:55:45.524364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.994 [2024-11-26 20:55:45.524390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.994 qpair failed and we were unable to recover it. 00:25:41.994 [2024-11-26 20:55:45.524530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.994 [2024-11-26 20:55:45.524556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.994 qpair failed and we were unable to recover it. 
00:25:41.994 [2024-11-26 20:55:45.524668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.994 [2024-11-26 20:55:45.524693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420
00:25:41.994 qpair failed and we were unable to recover it.
00:25:41.994 [2024-11-26 20:55:45.525203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.994 [2024-11-26 20:55:45.525231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420
00:25:41.994 qpair failed and we were unable to recover it.
00:25:41.994 [... the same three-line failure (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it) repeats continuously for tqpair=0x7f27bc000b90 and tqpair=0x7f27b0000b90, addr=10.0.0.2, port=4420, from 20:55:45.524 through 20:55:45.553 ...]
00:25:41.997 [2024-11-26 20:55:45.553550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.997 [2024-11-26 20:55:45.553577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420
00:25:41.997 qpair failed and we were unable to recover it.
00:25:41.997 [2024-11-26 20:55:45.553705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.997 [2024-11-26 20:55:45.553732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.997 qpair failed and we were unable to recover it. 00:25:41.997 [2024-11-26 20:55:45.553825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.997 [2024-11-26 20:55:45.553851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.997 qpair failed and we were unable to recover it. 00:25:41.997 [2024-11-26 20:55:45.553958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.997 [2024-11-26 20:55:45.553985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.997 qpair failed and we were unable to recover it. 00:25:41.997 [2024-11-26 20:55:45.554071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.997 [2024-11-26 20:55:45.554098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.997 qpair failed and we were unable to recover it. 00:25:41.997 [2024-11-26 20:55:45.554215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.997 [2024-11-26 20:55:45.554246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.997 qpair failed and we were unable to recover it. 00:25:41.997 [2024-11-26 20:55:45.554348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.997 [2024-11-26 20:55:45.554376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.997 qpair failed and we were unable to recover it. 00:25:41.997 [2024-11-26 20:55:45.554474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.997 [2024-11-26 20:55:45.554500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.997 qpair failed and we were unable to recover it. 00:25:41.997 [2024-11-26 20:55:45.554641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.997 [2024-11-26 20:55:45.554667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.997 qpair failed and we were unable to recover it. 00:25:41.997 [2024-11-26 20:55:45.554787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.997 [2024-11-26 20:55:45.554814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.997 qpair failed and we were unable to recover it. 00:25:41.997 [2024-11-26 20:55:45.554926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.997 [2024-11-26 20:55:45.554955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.997 qpair failed and we were unable to recover it. 
00:25:41.997 [2024-11-26 20:55:45.555049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.997 [2024-11-26 20:55:45.555076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.997 qpair failed and we were unable to recover it. 00:25:41.997 [2024-11-26 20:55:45.555168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.997 [2024-11-26 20:55:45.555194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.997 qpair failed and we were unable to recover it. 00:25:41.997 [2024-11-26 20:55:45.555316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.997 [2024-11-26 20:55:45.555344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.997 qpair failed and we were unable to recover it. 00:25:41.997 [2024-11-26 20:55:45.555462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.997 [2024-11-26 20:55:45.555489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.997 qpair failed and we were unable to recover it. 00:25:41.997 [2024-11-26 20:55:45.555571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.997 [2024-11-26 20:55:45.555604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.997 qpair failed and we were unable to recover it. 00:25:41.997 [2024-11-26 20:55:45.555702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.997 [2024-11-26 20:55:45.555729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.997 qpair failed and we were unable to recover it. 00:25:41.997 [2024-11-26 20:55:45.555803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.997 [2024-11-26 20:55:45.555829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.997 qpair failed and we were unable to recover it. 00:25:41.997 [2024-11-26 20:55:45.555947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.997 [2024-11-26 20:55:45.555973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.997 qpair failed and we were unable to recover it. 00:25:41.997 [2024-11-26 20:55:45.556093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.997 [2024-11-26 20:55:45.556121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.997 qpair failed and we were unable to recover it. 00:25:41.997 [2024-11-26 20:55:45.556236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.997 [2024-11-26 20:55:45.556261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.997 qpair failed and we were unable to recover it. 
00:25:41.997 [2024-11-26 20:55:45.556362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.997 [2024-11-26 20:55:45.556389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.997 qpair failed and we were unable to recover it. 00:25:41.997 [2024-11-26 20:55:45.556477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.997 [2024-11-26 20:55:45.556503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.997 qpair failed and we were unable to recover it. 00:25:41.997 [2024-11-26 20:55:45.556589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.997 [2024-11-26 20:55:45.556614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.997 qpair failed and we were unable to recover it. 00:25:41.997 [2024-11-26 20:55:45.556716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.997 [2024-11-26 20:55:45.556742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.997 qpair failed and we were unable to recover it. 00:25:41.997 [2024-11-26 20:55:45.556878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.997 [2024-11-26 20:55:45.556903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.997 qpair failed and we were unable to recover it. 00:25:41.997 [2024-11-26 20:55:45.556993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.997 [2024-11-26 20:55:45.557020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.997 qpair failed and we were unable to recover it. 00:25:41.997 [2024-11-26 20:55:45.557139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.997 [2024-11-26 20:55:45.557165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.997 qpair failed and we were unable to recover it. 00:25:41.997 [2024-11-26 20:55:45.557280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.997 [2024-11-26 20:55:45.557321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.997 qpair failed and we were unable to recover it. 00:25:41.997 [2024-11-26 20:55:45.557418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.997 [2024-11-26 20:55:45.557445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.997 qpair failed and we were unable to recover it. 00:25:41.997 [2024-11-26 20:55:45.557557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.997 [2024-11-26 20:55:45.557585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.997 qpair failed and we were unable to recover it. 
00:25:41.997 [2024-11-26 20:55:45.557704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.997 [2024-11-26 20:55:45.557732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.997 qpair failed and we were unable to recover it. 00:25:41.997 [2024-11-26 20:55:45.557868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.997 [2024-11-26 20:55:45.557909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.997 qpair failed and we were unable to recover it. 00:25:41.997 [2024-11-26 20:55:45.558028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.997 [2024-11-26 20:55:45.558055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.997 qpair failed and we were unable to recover it. 00:25:41.997 [2024-11-26 20:55:45.558175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.997 [2024-11-26 20:55:45.558202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.997 qpair failed and we were unable to recover it. 00:25:41.997 [2024-11-26 20:55:45.558296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.997 [2024-11-26 20:55:45.558329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.997 qpair failed and we were unable to recover it. 00:25:41.997 [2024-11-26 20:55:45.558448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.558474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.558593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.558620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.558707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.558733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.558869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.558897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.558989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.559016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 
00:25:41.998 [2024-11-26 20:55:45.559097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.559125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.559223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.559250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.559333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.559360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.559456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.559483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.559570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.559597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.559705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.559733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.559874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.559901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.559984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.560011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.560133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.560160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.560256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.560282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 
00:25:41.998 [2024-11-26 20:55:45.560410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.560437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.560552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.560579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.560699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.560726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.560821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.560848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.560961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.560988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.561105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.561132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.561263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.561310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.561433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.561460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.561609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.561641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.561754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.561782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 
00:25:41.998 [2024-11-26 20:55:45.561879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.561906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.561999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.562031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.562148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.562176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.562323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.562366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.562494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.562522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.562644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.562670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.562805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.562832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.562921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.562948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.563061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.563087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.563178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.563205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 
00:25:41.998 [2024-11-26 20:55:45.563341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.563368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.563450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.563482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.563586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.563619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.563733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.563758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.563846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.563872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.563965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.563991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.564070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.564095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.564211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.564236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.564332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.564359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.564457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.564489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 
00:25:41.998 [2024-11-26 20:55:45.564588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.564632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.564731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.564759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.564867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.564893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.564980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.565006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.565090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.565117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.565221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.565250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.565359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.565388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.565486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.565512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.565623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.565650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.565740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.565767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 
00:25:41.998 [2024-11-26 20:55:45.565865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.565891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.565984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.566021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.566140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.566167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.566270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.566312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.566400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.566426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.566539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.566564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.566712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.566738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.566833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.566859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.566979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.567012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.567129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.567157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 
00:25:41.998 [2024-11-26 20:55:45.567284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.567319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.567414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.567441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.567538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.567565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.567654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.567682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.567805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.567837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.567956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.567983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.568101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.568127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.568241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.998 [2024-11-26 20:55:45.568267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.998 qpair failed and we were unable to recover it. 00:25:41.998 [2024-11-26 20:55:45.568363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.999 [2024-11-26 20:55:45.568391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.999 qpair failed and we were unable to recover it. 00:25:41.999 [2024-11-26 20:55:45.568504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.999 [2024-11-26 20:55:45.568530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.999 qpair failed and we were unable to recover it. 
00:25:41.999 [2024-11-26 20:55:45.568626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.999 [2024-11-26 20:55:45.568652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.999 qpair failed and we were unable to recover it. 00:25:41.999 [2024-11-26 20:55:45.568790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.999 [2024-11-26 20:55:45.568821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.999 qpair failed and we were unable to recover it. 00:25:41.999 [2024-11-26 20:55:45.568934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.999 [2024-11-26 20:55:45.568960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.999 qpair failed and we were unable to recover it. 00:25:41.999 [2024-11-26 20:55:45.569037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.999 [2024-11-26 20:55:45.569072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.999 qpair failed and we were unable to recover it. 00:25:41.999 [2024-11-26 20:55:45.569183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.999 [2024-11-26 20:55:45.569209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.999 qpair failed and we were unable to recover it. 00:25:41.999 [2024-11-26 20:55:45.569298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.999 [2024-11-26 20:55:45.569332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.999 qpair failed and we were unable to recover it. 00:25:41.999 [2024-11-26 20:55:45.569408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.999 [2024-11-26 20:55:45.569434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.999 qpair failed and we were unable to recover it. 00:25:41.999 [2024-11-26 20:55:45.569520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.999 [2024-11-26 20:55:45.569546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.999 qpair failed and we were unable to recover it. 00:25:41.999 [2024-11-26 20:55:45.569737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.999 [2024-11-26 20:55:45.569763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.999 qpair failed and we were unable to recover it. 00:25:41.999 [2024-11-26 20:55:45.569903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.999 [2024-11-26 20:55:45.569928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.999 qpair failed and we were unable to recover it. 
00:25:41.999 [2024-11-26 20:55:45.570033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.999 [2024-11-26 20:55:45.570060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.999 qpair failed and we were unable to recover it. 00:25:41.999 [2024-11-26 20:55:45.570186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.999 [2024-11-26 20:55:45.570212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:41.999 qpair failed and we were unable to recover it. 00:25:41.999 [2024-11-26 20:55:45.570334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.999 [2024-11-26 20:55:45.570364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.999 qpair failed and we were unable to recover it. 00:25:41.999 [2024-11-26 20:55:45.570452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.999 [2024-11-26 20:55:45.570479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.999 qpair failed and we were unable to recover it. 00:25:41.999 [2024-11-26 20:55:45.570558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.999 [2024-11-26 20:55:45.570585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.999 qpair failed and we were unable to recover it. 00:25:41.999 [2024-11-26 20:55:45.570727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.999 [2024-11-26 20:55:45.570754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.999 qpair failed and we were unable to recover it. 00:25:41.999 [2024-11-26 20:55:45.570877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.999 [2024-11-26 20:55:45.570904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.999 qpair failed and we were unable to recover it. 00:25:41.999 [2024-11-26 20:55:45.570991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.999 [2024-11-26 20:55:45.571019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.999 qpair failed and we were unable to recover it. 00:25:41.999 [2024-11-26 20:55:45.571154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.999 [2024-11-26 20:55:45.571183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.999 qpair failed and we were unable to recover it. 00:25:41.999 [2024-11-26 20:55:45.571276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:41.999 [2024-11-26 20:55:45.571320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:41.999 qpair failed and we were unable to recover it. 
00:25:41.999 [2024-11-26 20:55:45.571414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:41.999 [2024-11-26 20:55:45.571443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420
00:25:41.999 qpair failed and we were unable to recover it.
00:25:42.000 [2024-11-26 20:55:45.583994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:42.001 [2024-11-26 20:55:45.588909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.001 [2024-11-26 20:55:45.588949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420
00:25:42.001 qpair failed and we were unable to recover it.
00:25:42.002 [2024-11-26 20:55:45.599887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.002 [2024-11-26 20:55:45.599924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420
00:25:42.002 qpair failed and we were unable to recover it.
00:25:42.002 [2024-11-26 20:55:45.600042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.600069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.600195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.600221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.600351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.600379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.600494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.600521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.600622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.600648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.600739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.600766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.600854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.600881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.600970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.601005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.601135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.601163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.601255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.601283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 
00:25:42.002 [2024-11-26 20:55:45.601411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.601438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.601527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.601554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.601650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.601676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.601814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.601840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.601937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.601963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.602084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.602111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.602217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.602244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.602323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.602350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.602449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.602476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.602567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.602594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 
00:25:42.002 [2024-11-26 20:55:45.602714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.602741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.602838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.602865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.602947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.602974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.603089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.603116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.603225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.603252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.603366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.603393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.603482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.603509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.603634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.603661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.603748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.603774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.603861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.603888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 
00:25:42.002 [2024-11-26 20:55:45.603999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.604030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.604121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.604147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.604264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.604291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.604399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.604427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.604589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.604643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.604779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.604813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.604954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.604980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.605099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.605126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.605221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.605246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.605367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.605400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 
00:25:42.002 [2024-11-26 20:55:45.605492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.605518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.605617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.605652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.605761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.605786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.605899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.605926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.606033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.606059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.606152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.606178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.606307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.002 [2024-11-26 20:55:45.606337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.002 qpair failed and we were unable to recover it. 00:25:42.002 [2024-11-26 20:55:45.606479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.606507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.606597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.606625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.606749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.606776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 
00:25:42.003 [2024-11-26 20:55:45.606888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.606915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.607038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.607073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.607197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.607225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.607326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.607354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.607472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.607498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.607594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.607626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.607721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.607747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.607846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.607873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.608004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.608031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.608111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.608137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 
00:25:42.003 [2024-11-26 20:55:45.608220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.608247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.608381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.608410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.608525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.608553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.608657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.608696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.608808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.608835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.608956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.608983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.609124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.609166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.609331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.609371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.609466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.609493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.609623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.609649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 
00:25:42.003 [2024-11-26 20:55:45.609747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.609781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.609878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.609903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.609997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.610026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.610140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.610166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.610271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.610316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.610408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.610434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.610517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.610544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.610653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.610680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.610781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.610809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.610925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.610952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 
00:25:42.003 [2024-11-26 20:55:45.611052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.611078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.611162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.611188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.611319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.611350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.611471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.611497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.611609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.611637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.611760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.611787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.611867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.611893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.611989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.612018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.612101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.612128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.612246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.612273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 
00:25:42.003 [2024-11-26 20:55:45.612381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.612408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.612517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.612544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.612641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.612666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.612758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.612784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.612879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.612906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.612990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.613017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.613103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.613131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.613231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.613274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.613394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.613422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.613518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.613544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 
00:25:42.003 [2024-11-26 20:55:45.613640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.613676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.613792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.613819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.613933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.613967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.614078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.614104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.614190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.614215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.614340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.614366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.614446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.614478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.614562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.614599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.614708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.614734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.614862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.614887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 
00:25:42.003 [2024-11-26 20:55:45.614979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.615007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.615116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.615141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.615239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.615268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.615381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.615423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.615518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.615547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.615700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.615727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.615845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.615872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.615966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.615993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.616089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.616117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 00:25:42.003 [2024-11-26 20:55:45.616220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.003 [2024-11-26 20:55:45.616261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.003 qpair failed and we were unable to recover it. 
00:25:42.003 [2024-11-26 20:55:45.616378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.616406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.616514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.616540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.616637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.616662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.616762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.616789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.616905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.616931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.617018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.617044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.617147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.617173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.617262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.617287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.617403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.617429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.617516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.617543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 
00:25:42.004 [2024-11-26 20:55:45.617686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.617711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.617839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.617866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.617974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.617999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.618113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.618144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.618223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.618249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.618355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.618382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.618514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.618554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.618651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.618680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.618768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.618806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.618931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.618958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 
00:25:42.004 [2024-11-26 20:55:45.619080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.619107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.619196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.619224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.619346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.619374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.619465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.619493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.619579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.619607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.619699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.619726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.619843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.619874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.619975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.620004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.620092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.620119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.620240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.620268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 
00:25:42.004 [2024-11-26 20:55:45.620374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.620402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.620492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.620519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.620653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.620680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.620803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.620830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.620957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.620984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.621073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.621099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.621208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.621235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.621329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.621356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.621487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.621526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.621628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.621656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 
00:25:42.004 [2024-11-26 20:55:45.621751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.621777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.621897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.621933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.622052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.622077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.622186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.622213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.622314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.622341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.622432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.622458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.622577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.622611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.622742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.622768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.622884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.622910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.622997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.623024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 
00:25:42.004 [2024-11-26 20:55:45.623113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.623139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.623247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.623286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.623403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.623442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.623557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.623590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.623686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.623713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.623830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.623857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.623978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.624004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.624096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.624132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.624210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.624235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.624367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.624393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 
00:25:42.004 [2024-11-26 20:55:45.624481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.624507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.624617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.624643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.624740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.624766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.624856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.624882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.624971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.624997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.625075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.625100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.625208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.625233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.004 [2024-11-26 20:55:45.625357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.004 [2024-11-26 20:55:45.625384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.004 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.625461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.625486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.625585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.625621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 
00:25:42.005 [2024-11-26 20:55:45.625753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.625779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.625896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.625921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.626015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.626042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.626131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.626158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.626260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.626319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.626427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.626455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.626544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.626571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.626686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.626713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.626834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.626862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.626971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.626997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 
00:25:42.005 [2024-11-26 20:55:45.627090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.627124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.627242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.627270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.627394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.627420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.627502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.627528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.627620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.627645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.627753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.627779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.627870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.627895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.627985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.628012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.628129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.628155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.628273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.628323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 
00:25:42.005 [2024-11-26 20:55:45.628437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.628463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.628548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.628576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.628690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.628717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.628816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.628843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.628942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.628981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.629100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.629129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.629246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.629276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.629393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.629421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.629509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.629536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.629625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.629652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 
00:25:42.005 [2024-11-26 20:55:45.629765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.629793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.629885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.629912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.630000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.630038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.630153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.630179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.630277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.630321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.630412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.630438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.630525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.630551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.630639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.630669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.630757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.630783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.630887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.630916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 
00:25:42.005 [2024-11-26 20:55:45.631000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.631027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.631147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.631176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.631292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.631328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.631438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.631466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.631580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.631607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.631725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.631757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.631867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.631894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.631984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.632010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.632109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.632137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.632249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.632275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 
00:25:42.005 [2024-11-26 20:55:45.632372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.632399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.632493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.632519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.632639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.632667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.632751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.632778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.632894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.632920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.633012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.633038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.633122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.633148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.633231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.633256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.633362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.633388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.633498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.633524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 
00:25:42.005 [2024-11-26 20:55:45.633647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.633673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.633783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.633810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.633897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.633923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.634048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.634073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.634167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.634197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.634325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.634352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.634468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.634494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.634615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.634641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.634753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.634790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.005 [2024-11-26 20:55:45.634914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.634940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 
00:25:42.005 [2024-11-26 20:55:45.635026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.005 [2024-11-26 20:55:45.635052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.005 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.635169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.635196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.635323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.635352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.635468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.635493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.635576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.635613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.635752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.635777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.635889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.635915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.636012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.636039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.636156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.636197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.636297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.636335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 
00:25:42.006 [2024-11-26 20:55:45.636449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.636476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.636563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.636590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.636692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.636721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.636838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.636865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.636955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.636982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.637065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.637093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.637223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.637273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.637402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.637431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.637520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.637546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.637663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.637689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 
00:25:42.006 [2024-11-26 20:55:45.637781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.637807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.637928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.637958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.638069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.638094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.638178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.638203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.638300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.638340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.638434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.638460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.638572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.638598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.638710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.638747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.638866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.638892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.638981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.639006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 
00:25:42.006 [2024-11-26 20:55:45.639088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.639113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.639235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.639277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.639402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.639442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.639567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.639594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.639706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.639732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.639832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.639857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.639985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.640011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.640146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.640173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.640255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.640281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.640393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.640432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 
00:25:42.006 [2024-11-26 20:55:45.640562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.640589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.640682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.640709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.640796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.640823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.640922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.640950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.641096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.641121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.641204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.641230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.641326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.641353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.641443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.641469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.641588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.641621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.641769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.641796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 
00:25:42.006 [2024-11-26 20:55:45.641916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.641941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.642025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.642051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.642158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.642184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.642284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.642333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.642464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.642494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.642589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.642628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.642729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.642758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.642872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.642899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.642988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.643015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.643109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.643136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 
00:25:42.006 [2024-11-26 20:55:45.643235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.643264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.643364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.643393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.643523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.643551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.643640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.643666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.643783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.643809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.643927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.643953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.644061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.644090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.006 [2024-11-26 20:55:45.644216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.006 [2024-11-26 20:55:45.644257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.006 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.644376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.644404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.644497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.644523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 
00:25:42.007 [2024-11-26 20:55:45.644635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.644662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.644788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.644814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.644907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.644945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.645031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.645056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.645143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.645169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.645263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.645311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.645414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.645442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.645535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.645562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.645647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.645674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.645785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.645812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 
00:25:42.007 [2024-11-26 20:55:45.645928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.645955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.646041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.646067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.646154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.646183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.646281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.646316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.646403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.646429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.646570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.646597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.646689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.646715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.646830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.646856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.646943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.646975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.647089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.647117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 
00:25:42.007 [2024-11-26 20:55:45.647209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.647235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.647365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.647394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.647532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.647558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.647712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.647747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.647834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.647862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.647978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.648003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.648105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.648130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.648243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.648268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.648381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.648409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.648503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.648529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 
00:25:42.007 [2024-11-26 20:55:45.648624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.648650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.648862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.648900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.648995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.649022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.649110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.649136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.649229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.649255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.649358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.649399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.649498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.649525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.649611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.649637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.649719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.649755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 00:25:42.007 [2024-11-26 20:55:45.649853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.007 [2024-11-26 20:55:45.649880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.007 qpair failed and we were unable to recover it. 
00:25:42.007 [2024-11-26 20:55:45.649973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.007 [2024-11-26 20:55:45.650001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420
00:25:42.007 qpair failed and we were unable to recover it.
00:25:42.007 [2024-11-26 20:55:45.650088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.007 [2024-11-26 20:55:45.650113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420
00:25:42.007 qpair failed and we were unable to recover it.
00:25:42.007 [2024-11-26 20:55:45.650183] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:42.007 [2024-11-26 20:55:45.650218] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:42.007 [2024-11-26 20:55:45.650224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.007 [2024-11-26 20:55:45.650233] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:42.007 [2024-11-26 20:55:45.650245] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:42.007 [2024-11-26 20:55:45.650250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420
00:25:42.007 [2024-11-26 20:55:45.650255] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:42.008 qpair failed and we were unable to recover it.
00:25:42.008 [2024-11-26 20:55:45.650344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.008 [2024-11-26 20:55:45.650375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420
00:25:42.008 qpair failed and we were unable to recover it.
00:25:42.008 [2024-11-26 20:55:45.650470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.008 [2024-11-26 20:55:45.650496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420
00:25:42.008 qpair failed and we were unable to recover it.
00:25:42.008 [2024-11-26 20:55:45.650583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.008 [2024-11-26 20:55:45.650612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420
00:25:42.008 qpair failed and we were unable to recover it.
00:25:42.008 [2024-11-26 20:55:45.650697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.008 [2024-11-26 20:55:45.650722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420
00:25:42.008 qpair failed and we were unable to recover it.
00:25:42.008 [2024-11-26 20:55:45.650812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.008 [2024-11-26 20:55:45.650837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420
00:25:42.008 qpair failed and we were unable to recover it.
00:25:42.008 [2024-11-26 20:55:45.650958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.008 [2024-11-26 20:55:45.650984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.008 qpair failed and we were unable to recover it. 00:25:42.008 [2024-11-26 20:55:45.651104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.008 [2024-11-26 20:55:45.651132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.008 qpair failed and we were unable to recover it. 00:25:42.008 [2024-11-26 20:55:45.651255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.008 [2024-11-26 20:55:45.651282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.008 qpair failed and we were unable to recover it. 00:25:42.008 [2024-11-26 20:55:45.651395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.008 [2024-11-26 20:55:45.651423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.008 qpair failed and we were unable to recover it. 00:25:42.008 [2024-11-26 20:55:45.651538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.008 [2024-11-26 20:55:45.651564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.008 qpair failed and we were unable to recover it. 00:25:42.008 [2024-11-26 20:55:45.651680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.008 [2024-11-26 20:55:45.651706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.008 qpair failed and we were unable to recover it. 00:25:42.008 [2024-11-26 20:55:45.651793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.008 [2024-11-26 20:55:45.651820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.008 qpair failed and we were unable to recover it. 00:25:42.008 [2024-11-26 20:55:45.651923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.008 [2024-11-26 20:55:45.651951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.008 qpair failed and we were unable to recover it. 00:25:42.008 [2024-11-26 20:55:45.652038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.008 [2024-11-26 20:55:45.652064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.008 qpair failed and we were unable to recover it. 00:25:42.008 [2024-11-26 20:55:45.652152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.008 [2024-11-26 20:55:45.652178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.008 qpair failed and we were unable to recover it. 
00:25:42.008 [2024-11-26 20:55:45.652289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.008 [2024-11-26 20:55:45.652346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.008 qpair failed and we were unable to recover it. 00:25:42.008 [2024-11-26 20:55:45.652436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.008 [2024-11-26 20:55:45.652462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.008 qpair failed and we were unable to recover it. 00:25:42.008 [2024-11-26 20:55:45.652575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.008 [2024-11-26 20:55:45.652601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.008 qpair failed and we were unable to recover it. 00:25:42.008 [2024-11-26 20:55:45.652692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.008 [2024-11-26 20:55:45.652719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.008 qpair failed and we were unable to recover it. 00:25:42.008 [2024-11-26 20:55:45.652857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.008 [2024-11-26 20:55:45.652882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.008 qpair failed and we were unable to recover it. 00:25:42.008 [2024-11-26 20:55:45.652978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.008 [2024-11-26 20:55:45.653005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.008 qpair failed and we were unable to recover it. 00:25:42.008 [2024-11-26 20:55:45.653094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.008 [2024-11-26 20:55:45.653119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.008 qpair failed and we were unable to recover it. 00:25:42.008 [2024-11-26 20:55:45.653212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.008 [2024-11-26 20:55:45.653238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.008 qpair failed and we were unable to recover it. 00:25:42.008 [2024-11-26 20:55:45.653364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.008 [2024-11-26 20:55:45.653389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.008 qpair failed and we were unable to recover it. 00:25:42.008 [2024-11-26 20:55:45.653507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.008 [2024-11-26 20:55:45.653532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.008 qpair failed and we were unable to recover it. 
00:25:42.008 [2024-11-26 20:55:45.653621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.008 [2024-11-26 20:55:45.653646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420
00:25:42.008 qpair failed and we were unable to recover it.
00:25:42.008 [2024-11-26 20:55:45.653771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.008 [2024-11-26 20:55:45.653797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420
00:25:42.008 qpair failed and we were unable to recover it.
00:25:42.008 [2024-11-26 20:55:45.653892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.008 [2024-11-26 20:55:45.653921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420
00:25:42.008 qpair failed and we were unable to recover it.
00:25:42.008 [2024-11-26 20:55:45.654037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.008 [2024-11-26 20:55:45.654062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420
00:25:42.008 qpair failed and we were unable to recover it.
00:25:42.009 [2024-11-26 20:55:45.654171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.009 [2024-11-26 20:55:45.654196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420
00:25:42.009 qpair failed and we were unable to recover it.
00:25:42.009 [2024-11-26 20:55:45.654284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.009 [2024-11-26 20:55:45.654318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420
00:25:42.009 qpair failed and we were unable to recover it.
00:25:42.009 [2024-11-26 20:55:45.654332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:25:42.009 [2024-11-26 20:55:45.654407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.009 [2024-11-26 20:55:45.654434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420
[2024-11-26 20:55:45.654379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:25:42.009 qpair failed and we were unable to recover it.
00:25:42.009 [2024-11-26 20:55:45.654407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:25:42.009 [2024-11-26 20:55:45.654412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:25:42.009 [2024-11-26 20:55:45.654552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.009 [2024-11-26 20:55:45.654576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420
00:25:42.009 qpair failed and we were unable to recover it.
00:25:42.009 [2024-11-26 20:55:45.654670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:42.009 [2024-11-26 20:55:45.654695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420
00:25:42.009 qpair failed and we were unable to recover it.
00:25:42.009 [2024-11-26 20:55:45.654837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.009 [2024-11-26 20:55:45.654863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.009 qpair failed and we were unable to recover it. 00:25:42.009 [2024-11-26 20:55:45.654947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.009 [2024-11-26 20:55:45.654983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.009 qpair failed and we were unable to recover it. 00:25:42.009 [2024-11-26 20:55:45.655074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.009 [2024-11-26 20:55:45.655098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.009 qpair failed and we were unable to recover it. 00:25:42.009 [2024-11-26 20:55:45.655221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.009 [2024-11-26 20:55:45.655245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.009 qpair failed and we were unable to recover it. 00:25:42.009 [2024-11-26 20:55:45.655366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.009 [2024-11-26 20:55:45.655392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.009 qpair failed and we were unable to recover it. 00:25:42.009 [2024-11-26 20:55:45.655476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.009 [2024-11-26 20:55:45.655501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.009 qpair failed and we were unable to recover it. 00:25:42.009 [2024-11-26 20:55:45.655587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.009 [2024-11-26 20:55:45.655610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.009 qpair failed and we were unable to recover it. 00:25:42.009 [2024-11-26 20:55:45.655697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.009 [2024-11-26 20:55:45.655721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.009 qpair failed and we were unable to recover it. 00:25:42.009 [2024-11-26 20:55:45.655800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.009 [2024-11-26 20:55:45.655824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.009 qpair failed and we were unable to recover it. 00:25:42.009 [2024-11-26 20:55:45.655941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.009 [2024-11-26 20:55:45.655968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.009 qpair failed and we were unable to recover it. 
00:25:42.009 [2024-11-26 20:55:45.656066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.009 [2024-11-26 20:55:45.656092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.009 qpair failed and we were unable to recover it. 00:25:42.009 [2024-11-26 20:55:45.656203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.009 [2024-11-26 20:55:45.656245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.009 qpair failed and we were unable to recover it. 00:25:42.009 [2024-11-26 20:55:45.656394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.009 [2024-11-26 20:55:45.656434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.009 qpair failed and we were unable to recover it. 00:25:42.009 [2024-11-26 20:55:45.656525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.009 [2024-11-26 20:55:45.656554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.009 qpair failed and we were unable to recover it. 00:25:42.009 [2024-11-26 20:55:45.656649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.009 [2024-11-26 20:55:45.656676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.009 qpair failed and we were unable to recover it. 00:25:42.009 [2024-11-26 20:55:45.656817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.009 [2024-11-26 20:55:45.656844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.009 qpair failed and we were unable to recover it. 00:25:42.009 [2024-11-26 20:55:45.656930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.009 [2024-11-26 20:55:45.656957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.009 qpair failed and we were unable to recover it. 00:25:42.009 [2024-11-26 20:55:45.657051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.009 [2024-11-26 20:55:45.657086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.009 qpair failed and we were unable to recover it. 00:25:42.009 [2024-11-26 20:55:45.657225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.009 [2024-11-26 20:55:45.657251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.009 qpair failed and we were unable to recover it. 00:25:42.009 [2024-11-26 20:55:45.657371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.009 [2024-11-26 20:55:45.657413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.009 qpair failed and we were unable to recover it. 
00:25:42.009 [2024-11-26 20:55:45.657508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.009 [2024-11-26 20:55:45.657535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.009 qpair failed and we were unable to recover it. 00:25:42.009 [2024-11-26 20:55:45.657645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.009 [2024-11-26 20:55:45.657670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.009 qpair failed and we were unable to recover it. 00:25:42.009 [2024-11-26 20:55:45.657783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.009 [2024-11-26 20:55:45.657814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.009 qpair failed and we were unable to recover it. 00:25:42.010 [2024-11-26 20:55:45.657911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.010 [2024-11-26 20:55:45.657938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.010 qpair failed and we were unable to recover it. 00:25:42.010 [2024-11-26 20:55:45.658053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.010 [2024-11-26 20:55:45.658080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.010 qpair failed and we were unable to recover it. 00:25:42.010 [2024-11-26 20:55:45.658170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.010 [2024-11-26 20:55:45.658195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.010 qpair failed and we were unable to recover it. 00:25:42.010 [2024-11-26 20:55:45.658285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.010 [2024-11-26 20:55:45.658321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.010 qpair failed and we were unable to recover it. 00:25:42.010 [2024-11-26 20:55:45.658408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.010 [2024-11-26 20:55:45.658433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.010 qpair failed and we were unable to recover it. 00:25:42.010 [2024-11-26 20:55:45.658516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.010 [2024-11-26 20:55:45.658541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.010 qpair failed and we were unable to recover it. 00:25:42.010 [2024-11-26 20:55:45.658663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.010 [2024-11-26 20:55:45.658689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.010 qpair failed and we were unable to recover it. 
00:25:42.010 [2024-11-26 20:55:45.658805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.010 [2024-11-26 20:55:45.658830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.010 qpair failed and we were unable to recover it. 00:25:42.010 [2024-11-26 20:55:45.658913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.010 [2024-11-26 20:55:45.658939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.010 qpair failed and we were unable to recover it. 00:25:42.010 [2024-11-26 20:55:45.659027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.010 [2024-11-26 20:55:45.659053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.010 qpair failed and we were unable to recover it. 00:25:42.010 [2024-11-26 20:55:45.659163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.010 [2024-11-26 20:55:45.659189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.010 qpair failed and we were unable to recover it. 00:25:42.010 [2024-11-26 20:55:45.659277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.010 [2024-11-26 20:55:45.659328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.010 qpair failed and we were unable to recover it. 00:25:42.010 [2024-11-26 20:55:45.659418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.010 [2024-11-26 20:55:45.659443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.010 qpair failed and we were unable to recover it. 00:25:42.010 [2024-11-26 20:55:45.659559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.010 [2024-11-26 20:55:45.659585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.010 qpair failed and we were unable to recover it. 00:25:42.010 [2024-11-26 20:55:45.659694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.010 [2024-11-26 20:55:45.659719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.010 qpair failed and we were unable to recover it. 00:25:42.010 [2024-11-26 20:55:45.659806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.010 [2024-11-26 20:55:45.659832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.010 qpair failed and we were unable to recover it. 00:25:42.010 [2024-11-26 20:55:45.659911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.010 [2024-11-26 20:55:45.659937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.010 qpair failed and we were unable to recover it. 
00:25:42.010 [2024-11-26 20:55:45.660039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.010 [2024-11-26 20:55:45.660079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.010 qpair failed and we were unable to recover it. 00:25:42.010 [2024-11-26 20:55:45.660196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.010 [2024-11-26 20:55:45.660236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.010 qpair failed and we were unable to recover it. 00:25:42.010 [2024-11-26 20:55:45.660331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.010 [2024-11-26 20:55:45.660361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.010 qpair failed and we were unable to recover it. 00:25:42.010 [2024-11-26 20:55:45.660445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.010 [2024-11-26 20:55:45.660472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.010 qpair failed and we were unable to recover it. 00:25:42.010 [2024-11-26 20:55:45.660560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.010 [2024-11-26 20:55:45.660588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.010 qpair failed and we were unable to recover it. 00:25:42.010 [2024-11-26 20:55:45.660705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.010 [2024-11-26 20:55:45.660732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.010 qpair failed and we were unable to recover it. 00:25:42.010 [2024-11-26 20:55:45.660823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.010 [2024-11-26 20:55:45.660850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.010 qpair failed and we were unable to recover it. 00:25:42.010 [2024-11-26 20:55:45.660952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.010 [2024-11-26 20:55:45.660987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.010 qpair failed and we were unable to recover it. 00:25:42.010 [2024-11-26 20:55:45.661088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.010 [2024-11-26 20:55:45.661131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.010 qpair failed and we were unable to recover it. 00:25:42.010 [2024-11-26 20:55:45.661247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.010 [2024-11-26 20:55:45.661275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.010 qpair failed and we were unable to recover it. 
00:25:42.010 [2024-11-26 20:55:45.661382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.010 [2024-11-26 20:55:45.661408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.010 qpair failed and we were unable to recover it. 00:25:42.010 [2024-11-26 20:55:45.661500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.010 [2024-11-26 20:55:45.661527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.010 qpair failed and we were unable to recover it. 00:25:42.010 [2024-11-26 20:55:45.661647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.010 [2024-11-26 20:55:45.661675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.010 qpair failed and we were unable to recover it. 00:25:42.011 [2024-11-26 20:55:45.661781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.011 [2024-11-26 20:55:45.661807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.011 qpair failed and we were unable to recover it. 00:25:42.011 [2024-11-26 20:55:45.661894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.011 [2024-11-26 20:55:45.661921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.011 qpair failed and we were unable to recover it. 00:25:42.011 [2024-11-26 20:55:45.662009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.011 [2024-11-26 20:55:45.662034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.011 qpair failed and we were unable to recover it. 00:25:42.011 [2024-11-26 20:55:45.662127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.011 [2024-11-26 20:55:45.662167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.011 qpair failed and we were unable to recover it. 00:25:42.011 [2024-11-26 20:55:45.662264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.011 [2024-11-26 20:55:45.662292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.011 qpair failed and we were unable to recover it. 00:25:42.011 [2024-11-26 20:55:45.662396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.011 [2024-11-26 20:55:45.662427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.011 qpair failed and we were unable to recover it. 00:25:42.011 [2024-11-26 20:55:45.662524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.011 [2024-11-26 20:55:45.662558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.011 qpair failed and we were unable to recover it. 
00:25:42.011 [2024-11-26 20:55:45.662664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.011 [2024-11-26 20:55:45.662692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.011 qpair failed and we were unable to recover it. 00:25:42.011 [2024-11-26 20:55:45.662803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.011 [2024-11-26 20:55:45.662831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.011 qpair failed and we were unable to recover it. 00:25:42.011 [2024-11-26 20:55:45.662926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.011 [2024-11-26 20:55:45.662955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.011 qpair failed and we were unable to recover it. 00:25:42.011 [2024-11-26 20:55:45.663048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.011 [2024-11-26 20:55:45.663075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.011 qpair failed and we were unable to recover it. 00:25:42.286 [2024-11-26 20:55:45.663158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.286 [2024-11-26 20:55:45.663187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.286 qpair failed and we were unable to recover it. 00:25:42.286 [2024-11-26 20:55:45.663276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.286 [2024-11-26 20:55:45.663309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.286 qpair failed and we were unable to recover it. 00:25:42.286 [2024-11-26 20:55:45.663397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.286 [2024-11-26 20:55:45.663424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 00:25:42.287 [2024-11-26 20:55:45.663520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.663546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 00:25:42.287 [2024-11-26 20:55:45.663666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.663693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 00:25:42.287 [2024-11-26 20:55:45.663804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.663830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 
00:25:42.287 [2024-11-26 20:55:45.663915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.663942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 00:25:42.287 [2024-11-26 20:55:45.664037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.664066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 00:25:42.287 [2024-11-26 20:55:45.664147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.664173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 00:25:42.287 [2024-11-26 20:55:45.664272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.664298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 00:25:42.287 [2024-11-26 20:55:45.664395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.664421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 00:25:42.287 [2024-11-26 20:55:45.664511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.664538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 00:25:42.287 [2024-11-26 20:55:45.664646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.664671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 00:25:42.287 [2024-11-26 20:55:45.664759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.664784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 00:25:42.287 [2024-11-26 20:55:45.664903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.664927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 00:25:42.287 [2024-11-26 20:55:45.665037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.665063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 
00:25:42.287 [2024-11-26 20:55:45.665139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.665164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 00:25:42.287 [2024-11-26 20:55:45.665241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.665266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 00:25:42.287 [2024-11-26 20:55:45.665399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.665425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 00:25:42.287 [2024-11-26 20:55:45.665534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.665559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 00:25:42.287 [2024-11-26 20:55:45.665667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.665692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 00:25:42.287 [2024-11-26 20:55:45.665783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.665807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 00:25:42.287 [2024-11-26 20:55:45.665898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.665928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 00:25:42.287 [2024-11-26 20:55:45.666035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.666060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 00:25:42.287 [2024-11-26 20:55:45.666140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.666165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 00:25:42.287 [2024-11-26 20:55:45.666246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.666275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 
00:25:42.287 [2024-11-26 20:55:45.666395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.666435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 00:25:42.287 [2024-11-26 20:55:45.666534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.666562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 00:25:42.287 [2024-11-26 20:55:45.666671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.666698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 00:25:42.287 [2024-11-26 20:55:45.666776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.666803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 00:25:42.287 [2024-11-26 20:55:45.666891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.666932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 00:25:42.287 [2024-11-26 20:55:45.667054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.667081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 00:25:42.287 [2024-11-26 20:55:45.667175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.667203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 00:25:42.287 [2024-11-26 20:55:45.667326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.667353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 00:25:42.287 [2024-11-26 20:55:45.667441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.667467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 00:25:42.287 [2024-11-26 20:55:45.667550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.667577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 
00:25:42.287 [2024-11-26 20:55:45.667736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.667763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 00:25:42.287 [2024-11-26 20:55:45.667867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.667893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 00:25:42.287 [2024-11-26 20:55:45.667990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.668017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 00:25:42.287 [2024-11-26 20:55:45.668104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.287 [2024-11-26 20:55:45.668131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.287 qpair failed and we were unable to recover it. 00:25:42.287 [2024-11-26 20:55:45.668249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.668282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 00:25:42.288 [2024-11-26 20:55:45.668384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.668410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 00:25:42.288 [2024-11-26 20:55:45.668506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.668533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 00:25:42.288 [2024-11-26 20:55:45.668631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.668658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 00:25:42.288 [2024-11-26 20:55:45.668746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.668785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 00:25:42.288 [2024-11-26 20:55:45.668909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.668935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 
00:25:42.288 [2024-11-26 20:55:45.669050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.669076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 00:25:42.288 [2024-11-26 20:55:45.669169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.669194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 00:25:42.288 [2024-11-26 20:55:45.669270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.669314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 00:25:42.288 [2024-11-26 20:55:45.669415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.669457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 00:25:42.288 [2024-11-26 20:55:45.669549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.669578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 00:25:42.288 [2024-11-26 20:55:45.669722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.669760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 00:25:42.288 [2024-11-26 20:55:45.669889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.669917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 00:25:42.288 [2024-11-26 20:55:45.670002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.670030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 00:25:42.288 [2024-11-26 20:55:45.670108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.670134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 00:25:42.288 [2024-11-26 20:55:45.670231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.670258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 
00:25:42.288 [2024-11-26 20:55:45.670376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.670403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 00:25:42.288 [2024-11-26 20:55:45.670491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.670518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 00:25:42.288 [2024-11-26 20:55:45.670608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.670634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 00:25:42.288 [2024-11-26 20:55:45.670723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.670750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 00:25:42.288 [2024-11-26 20:55:45.670854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.670881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 00:25:42.288 [2024-11-26 20:55:45.670971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.670999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 00:25:42.288 [2024-11-26 20:55:45.671120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.671168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 00:25:42.288 [2024-11-26 20:55:45.671269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.671314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 00:25:42.288 [2024-11-26 20:55:45.671406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.671433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 00:25:42.288 [2024-11-26 20:55:45.671518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.671546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 
00:25:42.288 [2024-11-26 20:55:45.671669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.671697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 00:25:42.288 [2024-11-26 20:55:45.671797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.671824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 00:25:42.288 [2024-11-26 20:55:45.671919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.671947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 00:25:42.288 [2024-11-26 20:55:45.672039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.672066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 00:25:42.288 [2024-11-26 20:55:45.672191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.672217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 00:25:42.288 [2024-11-26 20:55:45.672300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.672339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 00:25:42.288 [2024-11-26 20:55:45.672428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.672454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 00:25:42.288 [2024-11-26 20:55:45.672545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.672571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 00:25:42.288 [2024-11-26 20:55:45.672687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.672713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 00:25:42.288 [2024-11-26 20:55:45.672822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.672849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 
00:25:42.288 [2024-11-26 20:55:45.672941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.672968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 00:25:42.288 [2024-11-26 20:55:45.673050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.288 [2024-11-26 20:55:45.673075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.288 qpair failed and we were unable to recover it. 00:25:42.289 [2024-11-26 20:55:45.673196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.673236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 00:25:42.289 [2024-11-26 20:55:45.673343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.673373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 00:25:42.289 [2024-11-26 20:55:45.673458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.673485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 00:25:42.289 [2024-11-26 20:55:45.673567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.673594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 00:25:42.289 [2024-11-26 20:55:45.673713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.673740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 00:25:42.289 [2024-11-26 20:55:45.673831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.673857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 00:25:42.289 [2024-11-26 20:55:45.673942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.673969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 00:25:42.289 [2024-11-26 20:55:45.674056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.674084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 
00:25:42.289 [2024-11-26 20:55:45.674164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.674193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 00:25:42.289 [2024-11-26 20:55:45.674281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.674318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 00:25:42.289 [2024-11-26 20:55:45.674409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.674436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 00:25:42.289 [2024-11-26 20:55:45.674517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.674549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 00:25:42.289 [2024-11-26 20:55:45.674641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.674668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 00:25:42.289 [2024-11-26 20:55:45.674746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.674772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 00:25:42.289 [2024-11-26 20:55:45.674857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.674884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 00:25:42.289 [2024-11-26 20:55:45.674971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.674999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 00:25:42.289 [2024-11-26 20:55:45.675108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.675146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 00:25:42.289 [2024-11-26 20:55:45.675237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.675266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 
00:25:42.289 [2024-11-26 20:55:45.675391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.675419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 00:25:42.289 [2024-11-26 20:55:45.675511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.675540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 00:25:42.289 [2024-11-26 20:55:45.675674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.675701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 00:25:42.289 [2024-11-26 20:55:45.675813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.675841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 00:25:42.289 [2024-11-26 20:55:45.675934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.675962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 00:25:42.289 [2024-11-26 20:55:45.676045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.676071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 00:25:42.289 [2024-11-26 20:55:45.676175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.676201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 00:25:42.289 [2024-11-26 20:55:45.676321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.676349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 00:25:42.289 [2024-11-26 20:55:45.676432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.676458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 00:25:42.289 [2024-11-26 20:55:45.676572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.676611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 
00:25:42.289 [2024-11-26 20:55:45.676698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.676725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 00:25:42.289 [2024-11-26 20:55:45.676805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.676830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 00:25:42.289 [2024-11-26 20:55:45.676939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.676965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 00:25:42.289 [2024-11-26 20:55:45.677046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.677072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 00:25:42.289 [2024-11-26 20:55:45.677152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.677179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 00:25:42.289 [2024-11-26 20:55:45.677262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.677288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 00:25:42.289 [2024-11-26 20:55:45.677412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.677440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 00:25:42.289 [2024-11-26 20:55:45.677522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.677549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 00:25:42.289 [2024-11-26 20:55:45.677649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.677675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 00:25:42.289 [2024-11-26 20:55:45.677758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.289 [2024-11-26 20:55:45.677784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.289 qpair failed and we were unable to recover it. 
00:25:42.290 [2024-11-26 20:55:45.677900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.677926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 00:25:42.290 [2024-11-26 20:55:45.678010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.678039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 00:25:42.290 [2024-11-26 20:55:45.678127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.678156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 00:25:42.290 [2024-11-26 20:55:45.678239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.678267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 00:25:42.290 [2024-11-26 20:55:45.678379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.678407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 00:25:42.290 [2024-11-26 20:55:45.678487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.678515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 00:25:42.290 [2024-11-26 20:55:45.678618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.678658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 00:25:42.290 [2024-11-26 20:55:45.678774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.678802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 00:25:42.290 [2024-11-26 20:55:45.678911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.678940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 00:25:42.290 [2024-11-26 20:55:45.679054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.679080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 
00:25:42.290 [2024-11-26 20:55:45.679169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.679196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 00:25:42.290 [2024-11-26 20:55:45.679277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.679319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 00:25:42.290 [2024-11-26 20:55:45.679410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.679437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 00:25:42.290 [2024-11-26 20:55:45.679517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.679548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 00:25:42.290 [2024-11-26 20:55:45.679642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.679668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 00:25:42.290 [2024-11-26 20:55:45.679776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.679802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 00:25:42.290 [2024-11-26 20:55:45.679889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.679915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 00:25:42.290 [2024-11-26 20:55:45.679998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.680024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 00:25:42.290 [2024-11-26 20:55:45.680106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.680135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 00:25:42.290 [2024-11-26 20:55:45.680222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.680249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 
00:25:42.290 [2024-11-26 20:55:45.680349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.680377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 00:25:42.290 [2024-11-26 20:55:45.680457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.680484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 00:25:42.290 [2024-11-26 20:55:45.680582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.680620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 00:25:42.290 [2024-11-26 20:55:45.680698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.680725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 00:25:42.290 [2024-11-26 20:55:45.680812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.680840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 00:25:42.290 [2024-11-26 20:55:45.680923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.680950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 00:25:42.290 [2024-11-26 20:55:45.681030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.681057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 00:25:42.290 [2024-11-26 20:55:45.681152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.681180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 00:25:42.290 [2024-11-26 20:55:45.681292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.681335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 00:25:42.290 [2024-11-26 20:55:45.681434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.681461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 
00:25:42.290 [2024-11-26 20:55:45.681569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.681595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 00:25:42.290 [2024-11-26 20:55:45.681684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.681710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 00:25:42.290 [2024-11-26 20:55:45.681789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.681815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 00:25:42.290 [2024-11-26 20:55:45.681894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.681922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 00:25:42.290 [2024-11-26 20:55:45.682003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.682030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 00:25:42.290 [2024-11-26 20:55:45.682110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.682137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 00:25:42.290 [2024-11-26 20:55:45.682215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.682242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 00:25:42.290 [2024-11-26 20:55:45.682345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.290 [2024-11-26 20:55:45.682374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.290 qpair failed and we were unable to recover it. 00:25:42.291 [2024-11-26 20:55:45.682458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.291 [2024-11-26 20:55:45.682486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.291 qpair failed and we were unable to recover it. 00:25:42.291 [2024-11-26 20:55:45.682565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.291 [2024-11-26 20:55:45.682593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.291 qpair failed and we were unable to recover it. 
00:25:42.291 [2024-11-26 20:55:45.682710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.291 [2024-11-26 20:55:45.682737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.291 qpair failed and we were unable to recover it. 00:25:42.291 [2024-11-26 20:55:45.682843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.291 [2024-11-26 20:55:45.682870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.291 qpair failed and we were unable to recover it. 00:25:42.291 [2024-11-26 20:55:45.682950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.291 [2024-11-26 20:55:45.682978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.291 qpair failed and we were unable to recover it. 00:25:42.291 [2024-11-26 20:55:45.683066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.291 [2024-11-26 20:55:45.683093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.291 qpair failed and we were unable to recover it. 00:25:42.291 [2024-11-26 20:55:45.683181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.291 [2024-11-26 20:55:45.683208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.291 qpair failed and we were unable to recover it. 00:25:42.291 [2024-11-26 20:55:45.683317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.291 [2024-11-26 20:55:45.683344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.291 qpair failed and we were unable to recover it. 00:25:42.291 [2024-11-26 20:55:45.683437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.291 [2024-11-26 20:55:45.683464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.291 qpair failed and we were unable to recover it. 00:25:42.291 [2024-11-26 20:55:45.683543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.291 [2024-11-26 20:55:45.683569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.291 qpair failed and we were unable to recover it. 00:25:42.291 [2024-11-26 20:55:45.683657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.291 [2024-11-26 20:55:45.683683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.291 qpair failed and we were unable to recover it. 00:25:42.291 [2024-11-26 20:55:45.683830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.291 [2024-11-26 20:55:45.683857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.291 qpair failed and we were unable to recover it. 
00:25:42.291 [2024-11-26 20:55:45.683953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.291 [2024-11-26 20:55:45.683979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.291 qpair failed and we were unable to recover it. 00:25:42.291 [2024-11-26 20:55:45.684060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.291 [2024-11-26 20:55:45.684089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.291 qpair failed and we were unable to recover it. 00:25:42.291 [2024-11-26 20:55:45.684202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.291 [2024-11-26 20:55:45.684242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.291 qpair failed and we were unable to recover it. 00:25:42.291 [2024-11-26 20:55:45.684368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.291 [2024-11-26 20:55:45.684414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.291 qpair failed and we were unable to recover it. 00:25:42.291 [2024-11-26 20:55:45.684508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.291 [2024-11-26 20:55:45.684538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.291 qpair failed and we were unable to recover it. 00:25:42.291 [2024-11-26 20:55:45.684628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.291 [2024-11-26 20:55:45.684655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.291 qpair failed and we were unable to recover it. 00:25:42.291 [2024-11-26 20:55:45.684746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.291 [2024-11-26 20:55:45.684772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.291 qpair failed and we were unable to recover it. 00:25:42.291 [2024-11-26 20:55:45.684860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.291 [2024-11-26 20:55:45.684888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.291 qpair failed and we were unable to recover it. 00:25:42.291 [2024-11-26 20:55:45.684977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.291 [2024-11-26 20:55:45.685004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.291 qpair failed and we were unable to recover it. 00:25:42.291 [2024-11-26 20:55:45.685126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.291 [2024-11-26 20:55:45.685166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.291 qpair failed and we were unable to recover it. 
00:25:42.291 [2024-11-26 20:55:45.685250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.291 [2024-11-26 20:55:45.685277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.291 qpair failed and we were unable to recover it. 00:25:42.291 [2024-11-26 20:55:45.685388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.291 [2024-11-26 20:55:45.685418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.291 qpair failed and we were unable to recover it. 00:25:42.291 [2024-11-26 20:55:45.685510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.291 [2024-11-26 20:55:45.685537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.291 qpair failed and we were unable to recover it. 00:25:42.291 [2024-11-26 20:55:45.685630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.291 [2024-11-26 20:55:45.685657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.291 qpair failed and we were unable to recover it. 00:25:42.291 [2024-11-26 20:55:45.685743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.291 [2024-11-26 20:55:45.685770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.291 qpair failed and we were unable to recover it. 00:25:42.291 [2024-11-26 20:55:45.685880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.291 [2024-11-26 20:55:45.685908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.291 qpair failed and we were unable to recover it. 00:25:42.291 [2024-11-26 20:55:45.686014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.291 [2024-11-26 20:55:45.686040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.291 qpair failed and we were unable to recover it. 00:25:42.291 [2024-11-26 20:55:45.686131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.291 [2024-11-26 20:55:45.686160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.291 qpair failed and we were unable to recover it. 00:25:42.291 [2024-11-26 20:55:45.686281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.291 [2024-11-26 20:55:45.686315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.291 qpair failed and we were unable to recover it. 00:25:42.291 [2024-11-26 20:55:45.686406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.291 [2024-11-26 20:55:45.686434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.292 qpair failed and we were unable to recover it. 
00:25:42.292 [2024-11-26 20:55:45.686522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.292 [2024-11-26 20:55:45.686549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.292 qpair failed and we were unable to recover it. 00:25:42.292 [2024-11-26 20:55:45.686643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.292 [2024-11-26 20:55:45.686669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.292 qpair failed and we were unable to recover it. 00:25:42.292 [2024-11-26 20:55:45.686758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.292 [2024-11-26 20:55:45.686784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.292 qpair failed and we were unable to recover it. 00:25:42.292 [2024-11-26 20:55:45.686925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.292 [2024-11-26 20:55:45.686951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.292 qpair failed and we were unable to recover it. 00:25:42.292 [2024-11-26 20:55:45.687035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.292 [2024-11-26 20:55:45.687062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.292 qpair failed and we were unable to recover it. 00:25:42.292 [2024-11-26 20:55:45.687167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.292 [2024-11-26 20:55:45.687207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.292 qpair failed and we were unable to recover it. 00:25:42.292 [2024-11-26 20:55:45.687313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.292 [2024-11-26 20:55:45.687341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.292 qpair failed and we were unable to recover it. 00:25:42.292 [2024-11-26 20:55:45.687423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.292 [2024-11-26 20:55:45.687449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.292 qpair failed and we were unable to recover it. 00:25:42.292 [2024-11-26 20:55:45.687528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.292 [2024-11-26 20:55:45.687553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.292 qpair failed and we were unable to recover it. 00:25:42.292 [2024-11-26 20:55:45.687647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.292 [2024-11-26 20:55:45.687672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.292 qpair failed and we were unable to recover it. 
00:25:42.292 [2024-11-26 20:55:45.687760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.292 [2024-11-26 20:55:45.687786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.292 qpair failed and we were unable to recover it. 00:25:42.292 [2024-11-26 20:55:45.687868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.292 [2024-11-26 20:55:45.687896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.292 qpair failed and we were unable to recover it. 00:25:42.292 [2024-11-26 20:55:45.687978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.292 [2024-11-26 20:55:45.688007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.292 qpair failed and we were unable to recover it. 00:25:42.292 [2024-11-26 20:55:45.688103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.292 [2024-11-26 20:55:45.688144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.292 qpair failed and we were unable to recover it. 00:25:42.292 [2024-11-26 20:55:45.688234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.292 [2024-11-26 20:55:45.688261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.292 qpair failed and we were unable to recover it. 00:25:42.292 [2024-11-26 20:55:45.688377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.292 [2024-11-26 20:55:45.688405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.292 qpair failed and we were unable to recover it. 00:25:42.292 [2024-11-26 20:55:45.688489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.292 [2024-11-26 20:55:45.688515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.292 qpair failed and we were unable to recover it. 00:25:42.292 [2024-11-26 20:55:45.688633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.292 [2024-11-26 20:55:45.688658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.292 qpair failed and we were unable to recover it. 00:25:42.292 [2024-11-26 20:55:45.688747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.292 [2024-11-26 20:55:45.688774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.292 qpair failed and we were unable to recover it. 00:25:42.292 [2024-11-26 20:55:45.688860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.292 [2024-11-26 20:55:45.688887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.292 qpair failed and we were unable to recover it. 
00:25:42.292 [2024-11-26 20:55:45.688999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.292 [2024-11-26 20:55:45.689026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.292 qpair failed and we were unable to recover it. 00:25:42.292 [2024-11-26 20:55:45.689140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.292 [2024-11-26 20:55:45.689166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.292 qpair failed and we were unable to recover it. 00:25:42.292 [2024-11-26 20:55:45.689254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.292 [2024-11-26 20:55:45.689280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.292 qpair failed and we were unable to recover it. 00:25:42.292 [2024-11-26 20:55:45.689430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.292 [2024-11-26 20:55:45.689462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.292 qpair failed and we were unable to recover it. 00:25:42.292 [2024-11-26 20:55:45.689576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.292 [2024-11-26 20:55:45.689617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.292 qpair failed and we were unable to recover it. 00:25:42.292 [2024-11-26 20:55:45.689740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.292 [2024-11-26 20:55:45.689769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.292 qpair failed and we were unable to recover it. 00:25:42.292 [2024-11-26 20:55:45.689852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.292 [2024-11-26 20:55:45.689879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.292 qpair failed and we were unable to recover it. 00:25:42.292 [2024-11-26 20:55:45.689967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.292 [2024-11-26 20:55:45.689994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.292 qpair failed and we were unable to recover it. 00:25:42.292 [2024-11-26 20:55:45.690097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.292 [2024-11-26 20:55:45.690137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.292 qpair failed and we were unable to recover it. 00:25:42.292 [2024-11-26 20:55:45.690232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.292 [2024-11-26 20:55:45.690259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.292 qpair failed and we were unable to recover it. 
00:25:42.293 [2024-11-26 20:55:45.690376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.690407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.690492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.690519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.690641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.690667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.690763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.690792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.690872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.690899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.691006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.691033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.691120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.691146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.691236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.691263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.691390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.691421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.691540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.691566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 
00:25:42.293 [2024-11-26 20:55:45.691682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.691707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.691793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.691819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.691909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.691934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.692048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.692074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.692176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.692216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.692323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.692353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.692439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.692467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.692556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.692582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.692698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.692726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.692813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.692840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 
00:25:42.293 [2024-11-26 20:55:45.692952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.692984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.693065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.693093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.693190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.693230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.693336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.693365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.693477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.693504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.693582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.693617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.693696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.693721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.693805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.693831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.693922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.693948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.694059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.694085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 
00:25:42.293 [2024-11-26 20:55:45.694182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.694221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.694322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.694351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.694441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.694469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.694586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.694620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.694716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.694742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.694882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.694909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.694991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.695019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.695111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.695141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.695234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.695275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.695380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.695409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 
00:25:42.293 [2024-11-26 20:55:45.695490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.695517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.695621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.695647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.695734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.695763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.695852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.695880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.695958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.695983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.696101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.696126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.696203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.696228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.696362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.696407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.696496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.696525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 00:25:42.293 [2024-11-26 20:55:45.696607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.293 [2024-11-26 20:55:45.696634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.293 qpair failed and we were unable to recover it. 
00:25:42.293 [2024-11-26 20:55:45.696723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.696750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.696841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.696869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.696955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.696981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.697120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.697147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.697262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.697290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.697403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.697432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.697524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.697553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.697647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.697673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.697757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.697783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.697865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.697890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 
00:25:42.294 [2024-11-26 20:55:45.697996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.698022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.698114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.698139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.698227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.698256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.698365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.698405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.698503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.698533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.698635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.698662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.698770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.698797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.698884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.698911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.698995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.699022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.699137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.699162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 
00:25:42.294 [2024-11-26 20:55:45.699245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.699271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.699375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.699402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.699478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.699503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.699591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.699616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.699733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.699761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.699845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.699871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.699960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.699987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.700077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.700105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.700223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.700263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.700395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.700424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 
00:25:42.294 [2024-11-26 20:55:45.700543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.700570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.700671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.700698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.700786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.700812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.700892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.700920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.701004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.701033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.701128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.701155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.701262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.701297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.701397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.701429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.701510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.701537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.701630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.701658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 
00:25:42.294 [2024-11-26 20:55:45.701738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.701765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.701893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.701933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.702028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.702056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.702164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.702204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.702338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.702368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.702452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.702479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.702559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.702597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.702687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.702713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.702806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.702846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.702965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.702992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 
00:25:42.294 [2024-11-26 20:55:45.703111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.703138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.703259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.703296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.703396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.703423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.703512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.703538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.703670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.703696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.703793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.703822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.703936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.703962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.704044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.704082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.704168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.704193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.704293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.704326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 
00:25:42.294 [2024-11-26 20:55:45.704416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.704442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.704583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.704611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.704700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.704727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.704819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.704847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.704941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.704987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.705078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.705108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.705187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.705214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.705343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.705370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.705459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.705486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.705573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.705609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 
00:25:42.294 [2024-11-26 20:55:45.705727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.705755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.705845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.705872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.294 qpair failed and we were unable to recover it. 00:25:42.294 [2024-11-26 20:55:45.705982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.294 [2024-11-26 20:55:45.706010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.706117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.706158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.706252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.706279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.706418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.706445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.706540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.706567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.706686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.706712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.706808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.706835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.706915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.706941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 
00:25:42.295 [2024-11-26 20:55:45.707024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.707050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.707136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.707165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.707246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.707275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.707393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.707420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.707508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.707534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.707644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.707670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.707760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.707786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.707869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.707896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.707971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.707996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.708091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.708130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 
00:25:42.295 [2024-11-26 20:55:45.708231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.708260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.708396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.708427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.708536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.708564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.708643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.708670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.708772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.708798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.708889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.708916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.709005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.709044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.709140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.709171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.709261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.709295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.709389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.709416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 
00:25:42.295 [2024-11-26 20:55:45.709526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.709552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.709642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.709669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.709755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.709783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.709868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.709895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.709979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.710008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.710116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.710155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.710255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.710283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.710389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.710416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.710528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.710556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.710638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.710665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 
00:25:42.295 [2024-11-26 20:55:45.710777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.710804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.710889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.710916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.711013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.711053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.711144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.711171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.711260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.711296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.711392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.711419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.711497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.711523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.711612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.711637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.711725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.711753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.711838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.711865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 
00:25:42.295 [2024-11-26 20:55:45.711943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.711970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.712083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.712109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.712195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.712221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.712344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.712373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.712451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.712479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.712565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.712602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.712680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.712707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.712786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.712813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.712890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.712917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.295 qpair failed and we were unable to recover it. 00:25:42.295 [2024-11-26 20:55:45.713001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.295 [2024-11-26 20:55:45.713029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 
00:25:42.296 [2024-11-26 20:55:45.713113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.713141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 00:25:42.296 [2024-11-26 20:55:45.713231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.713262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 00:25:42.296 [2024-11-26 20:55:45.713359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.713387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 00:25:42.296 [2024-11-26 20:55:45.713467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.713495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 00:25:42.296 [2024-11-26 20:55:45.713587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.713620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 00:25:42.296 [2024-11-26 20:55:45.713737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.713765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 00:25:42.296 [2024-11-26 20:55:45.713851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.713878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 00:25:42.296 [2024-11-26 20:55:45.713983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.714010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 00:25:42.296 [2024-11-26 20:55:45.714088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.714114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 00:25:42.296 [2024-11-26 20:55:45.714229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.714258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 
00:25:42.296 [2024-11-26 20:55:45.714378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.714406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 00:25:42.296 [2024-11-26 20:55:45.714497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.714536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 00:25:42.296 [2024-11-26 20:55:45.714741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.714769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 00:25:42.296 [2024-11-26 20:55:45.714858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.714884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 00:25:42.296 [2024-11-26 20:55:45.714965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.714991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 00:25:42.296 [2024-11-26 20:55:45.715077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.715103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 00:25:42.296 [2024-11-26 20:55:45.715214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.715256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 00:25:42.296 [2024-11-26 20:55:45.715393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.715421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 00:25:42.296 [2024-11-26 20:55:45.715510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.715538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 00:25:42.296 [2024-11-26 20:55:45.715629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.715655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 
00:25:42.296 [2024-11-26 20:55:45.715748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.715774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 00:25:42.296 [2024-11-26 20:55:45.715855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.715882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 00:25:42.296 [2024-11-26 20:55:45.715970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.715997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 00:25:42.296 [2024-11-26 20:55:45.716125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.716165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 00:25:42.296 [2024-11-26 20:55:45.716261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.716297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 00:25:42.296 [2024-11-26 20:55:45.716396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.716423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 00:25:42.296 [2024-11-26 20:55:45.716519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.716545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 00:25:42.296 [2024-11-26 20:55:45.716637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.716663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 00:25:42.296 [2024-11-26 20:55:45.716757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.716785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 00:25:42.296 [2024-11-26 20:55:45.716867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.716895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 
00:25:42.296 [2024-11-26 20:55:45.716985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.717013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 00:25:42.296 [2024-11-26 20:55:45.717102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.717129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 00:25:42.296 [2024-11-26 20:55:45.717205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.717231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 00:25:42.296 [2024-11-26 20:55:45.717329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.717357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 00:25:42.296 [2024-11-26 20:55:45.717444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.717470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 00:25:42.296 [2024-11-26 20:55:45.717552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.717579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.296 qpair failed and we were unable to recover it. 00:25:42.296 [2024-11-26 20:55:45.717664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.296 [2024-11-26 20:55:45.717690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 00:25:42.297 [2024-11-26 20:55:45.717809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.717835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 00:25:42.297 [2024-11-26 20:55:45.717912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.717938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 00:25:42.297 [2024-11-26 20:55:45.718015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.718041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 
00:25:42.297 [2024-11-26 20:55:45.718125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.718151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 00:25:42.297 [2024-11-26 20:55:45.718243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.718275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 00:25:42.297 [2024-11-26 20:55:45.718400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.718441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 00:25:42.297 [2024-11-26 20:55:45.718539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.718567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 00:25:42.297 [2024-11-26 20:55:45.718686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.718713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 00:25:42.297 [2024-11-26 20:55:45.718809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.718836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 00:25:42.297 [2024-11-26 20:55:45.718924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.718949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 00:25:42.297 [2024-11-26 20:55:45.719064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.719090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 00:25:42.297 [2024-11-26 20:55:45.719173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.719200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 00:25:42.297 [2024-11-26 20:55:45.719294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.719328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 
00:25:42.297 [2024-11-26 20:55:45.719406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.719432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 00:25:42.297 [2024-11-26 20:55:45.719521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.719547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 00:25:42.297 [2024-11-26 20:55:45.719633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.719659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 00:25:42.297 [2024-11-26 20:55:45.719737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.719764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 00:25:42.297 [2024-11-26 20:55:45.719882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.719908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 00:25:42.297 [2024-11-26 20:55:45.720025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.720051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 00:25:42.297 [2024-11-26 20:55:45.720130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.720157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 00:25:42.297 [2024-11-26 20:55:45.720242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.720268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 00:25:42.297 [2024-11-26 20:55:45.720356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.720383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 00:25:42.297 [2024-11-26 20:55:45.720468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.720496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 
00:25:42.297 [2024-11-26 20:55:45.720583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.720609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 00:25:42.297 [2024-11-26 20:55:45.720694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.720720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 00:25:42.297 [2024-11-26 20:55:45.720836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.720862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 00:25:42.297 [2024-11-26 20:55:45.720938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.720963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 00:25:42.297 [2024-11-26 20:55:45.721039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.721064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 00:25:42.297 [2024-11-26 20:55:45.721192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.721233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 00:25:42.297 [2024-11-26 20:55:45.721337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.721369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 00:25:42.297 [2024-11-26 20:55:45.721469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.721496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 00:25:42.297 [2024-11-26 20:55:45.721585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.721613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 00:25:42.297 [2024-11-26 20:55:45.721730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.721756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 
00:25:42.297 [2024-11-26 20:55:45.721865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.721891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 00:25:42.297 [2024-11-26 20:55:45.721970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.721996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 00:25:42.297 [2024-11-26 20:55:45.722086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.722116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 00:25:42.297 [2024-11-26 20:55:45.722214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.722254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 00:25:42.297 [2024-11-26 20:55:45.722361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.297 [2024-11-26 20:55:45.722390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.297 qpair failed and we were unable to recover it. 00:25:42.298 [2024-11-26 20:55:45.722478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.722507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.298 qpair failed and we were unable to recover it. 00:25:42.298 [2024-11-26 20:55:45.722591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.722618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.298 qpair failed and we were unable to recover it. 00:25:42.298 [2024-11-26 20:55:45.722733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.722759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.298 qpair failed and we were unable to recover it. 00:25:42.298 [2024-11-26 20:55:45.722845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.722872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.298 qpair failed and we were unable to recover it. 00:25:42.298 [2024-11-26 20:55:45.722957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.722984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.298 qpair failed and we were unable to recover it. 
00:25:42.298 [2024-11-26 20:55:45.723077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.723117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.298 qpair failed and we were unable to recover it. 00:25:42.298 [2024-11-26 20:55:45.723238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.723271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.298 qpair failed and we were unable to recover it. 00:25:42.298 [2024-11-26 20:55:45.723373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.723402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.298 qpair failed and we were unable to recover it. 00:25:42.298 [2024-11-26 20:55:45.723497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.723525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.298 qpair failed and we were unable to recover it. 00:25:42.298 [2024-11-26 20:55:45.723618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.723645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.298 qpair failed and we were unable to recover it. 00:25:42.298 [2024-11-26 20:55:45.723726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.723752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.298 qpair failed and we were unable to recover it. 00:25:42.298 [2024-11-26 20:55:45.723873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.723901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.298 qpair failed and we were unable to recover it. 00:25:42.298 [2024-11-26 20:55:45.723980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.724007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.298 qpair failed and we were unable to recover it. 00:25:42.298 [2024-11-26 20:55:45.724093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.724121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.298 qpair failed and we were unable to recover it. 00:25:42.298 [2024-11-26 20:55:45.724215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.724241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.298 qpair failed and we were unable to recover it. 
00:25:42.298 [2024-11-26 20:55:45.724324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.724355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.298 qpair failed and we were unable to recover it. 00:25:42.298 [2024-11-26 20:55:45.724470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.724496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.298 qpair failed and we were unable to recover it. 00:25:42.298 [2024-11-26 20:55:45.724585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.724613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.298 qpair failed and we were unable to recover it. 00:25:42.298 [2024-11-26 20:55:45.724705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.724732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.298 qpair failed and we were unable to recover it. 00:25:42.298 [2024-11-26 20:55:45.724828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.724853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.298 qpair failed and we were unable to recover it. 00:25:42.298 [2024-11-26 20:55:45.724943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.724969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.298 qpair failed and we were unable to recover it. 00:25:42.298 [2024-11-26 20:55:45.725086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.725111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.298 qpair failed and we were unable to recover it. 00:25:42.298 [2024-11-26 20:55:45.725221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.725250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.298 qpair failed and we were unable to recover it. 00:25:42.298 [2024-11-26 20:55:45.725343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.725372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.298 qpair failed and we were unable to recover it. 00:25:42.298 [2024-11-26 20:55:45.725458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.725485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.298 qpair failed and we were unable to recover it. 
00:25:42.298 [2024-11-26 20:55:45.725577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.725603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.298 qpair failed and we were unable to recover it. 00:25:42.298 [2024-11-26 20:55:45.725695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.725721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.298 qpair failed and we were unable to recover it. 00:25:42.298 [2024-11-26 20:55:45.725820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.725847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.298 qpair failed and we were unable to recover it. 00:25:42.298 [2024-11-26 20:55:45.725936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.725965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.298 qpair failed and we were unable to recover it. 00:25:42.298 [2024-11-26 20:55:45.726055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.726084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.298 qpair failed and we were unable to recover it. 00:25:42.298 [2024-11-26 20:55:45.726175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.726203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.298 qpair failed and we were unable to recover it. 00:25:42.298 [2024-11-26 20:55:45.726287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.726325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.298 qpair failed and we were unable to recover it. 00:25:42.298 [2024-11-26 20:55:45.726414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.726442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.298 qpair failed and we were unable to recover it. 00:25:42.298 [2024-11-26 20:55:45.726520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.726553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.298 qpair failed and we were unable to recover it. 00:25:42.298 [2024-11-26 20:55:45.726636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.726663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.298 qpair failed and we were unable to recover it. 
00:25:42.298 [2024-11-26 20:55:45.726746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.726773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.298 qpair failed and we were unable to recover it. 00:25:42.298 [2024-11-26 20:55:45.726856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.298 [2024-11-26 20:55:45.726882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.299 qpair failed and we were unable to recover it. 00:25:42.299 [2024-11-26 20:55:45.727022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.299 [2024-11-26 20:55:45.727050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.299 qpair failed and we were unable to recover it. 00:25:42.299 [2024-11-26 20:55:45.727172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.299 [2024-11-26 20:55:45.727213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.299 qpair failed and we were unable to recover it. 00:25:42.299 [2024-11-26 20:55:45.727313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.299 [2024-11-26 20:55:45.727353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.299 qpair failed and we were unable to recover it. 00:25:42.299 [2024-11-26 20:55:45.727444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.299 [2024-11-26 20:55:45.727472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.299 qpair failed and we were unable to recover it. 00:25:42.299 [2024-11-26 20:55:45.727673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.299 [2024-11-26 20:55:45.727700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.299 qpair failed and we were unable to recover it. 00:25:42.299 [2024-11-26 20:55:45.727785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.299 [2024-11-26 20:55:45.727812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.299 qpair failed and we were unable to recover it. 00:25:42.299 [2024-11-26 20:55:45.727929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.299 [2024-11-26 20:55:45.727957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.299 qpair failed and we were unable to recover it. 00:25:42.299 [2024-11-26 20:55:45.728051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.299 [2024-11-26 20:55:45.728079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.299 qpair failed and we were unable to recover it. 
00:25:42.299 [2024-11-26 20:55:45.728175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.299 [2024-11-26 20:55:45.728215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.299 qpair failed and we were unable to recover it. 00:25:42.299 [2024-11-26 20:55:45.728314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.299 [2024-11-26 20:55:45.728344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.299 qpair failed and we were unable to recover it. 00:25:42.299 [2024-11-26 20:55:45.728471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.299 [2024-11-26 20:55:45.728499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.299 qpair failed and we were unable to recover it. 00:25:42.299 [2024-11-26 20:55:45.728608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.299 [2024-11-26 20:55:45.728635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.299 qpair failed and we were unable to recover it. 00:25:42.299 [2024-11-26 20:55:45.728721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.299 [2024-11-26 20:55:45.728747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.299 qpair failed and we were unable to recover it. 00:25:42.299 [2024-11-26 20:55:45.728828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.299 [2024-11-26 20:55:45.728854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.299 qpair failed and we were unable to recover it. 00:25:42.299 [2024-11-26 20:55:45.728962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.299 [2024-11-26 20:55:45.728989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.299 qpair failed and we were unable to recover it. 00:25:42.299 [2024-11-26 20:55:45.729074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.299 [2024-11-26 20:55:45.729100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.299 qpair failed and we were unable to recover it. 00:25:42.299 [2024-11-26 20:55:45.729179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.299 [2024-11-26 20:55:45.729208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.299 qpair failed and we were unable to recover it. 00:25:42.299 [2024-11-26 20:55:45.729296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.299 [2024-11-26 20:55:45.729328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.299 qpair failed and we were unable to recover it. 
00:25:42.299 [2024-11-26 20:55:45.729407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.299 [2024-11-26 20:55:45.729434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.299 qpair failed and we were unable to recover it. 00:25:42.299 [2024-11-26 20:55:45.729511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.299 [2024-11-26 20:55:45.729538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.299 qpair failed and we were unable to recover it. 00:25:42.299 [2024-11-26 20:55:45.729616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.299 [2024-11-26 20:55:45.729643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.299 qpair failed and we were unable to recover it. 00:25:42.299 [2024-11-26 20:55:45.729751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.299 [2024-11-26 20:55:45.729778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.299 qpair failed and we were unable to recover it. 00:25:42.299 [2024-11-26 20:55:45.729861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.299 [2024-11-26 20:55:45.729888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.299 qpair failed and we were unable to recover it. 00:25:42.299 [2024-11-26 20:55:45.729987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.299 [2024-11-26 20:55:45.730015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.299 qpair failed and we were unable to recover it. 00:25:42.299 [2024-11-26 20:55:45.730102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.299 [2024-11-26 20:55:45.730128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.299 qpair failed and we were unable to recover it. 00:25:42.299 [2024-11-26 20:55:45.730205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.299 [2024-11-26 20:55:45.730231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.299 qpair failed and we were unable to recover it. 00:25:42.299 [2024-11-26 20:55:45.730367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.299 [2024-11-26 20:55:45.730395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.299 qpair failed and we were unable to recover it. 00:25:42.299 [2024-11-26 20:55:45.730537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.299 [2024-11-26 20:55:45.730562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.299 qpair failed and we were unable to recover it. 
00:25:42.300 [2024-11-26 20:55:45.730649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.730675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 00:25:42.300 [2024-11-26 20:55:45.730797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.730822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 00:25:42.300 [2024-11-26 20:55:45.730909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.730933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 00:25:42.300 [2024-11-26 20:55:45.731045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.731072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 00:25:42.300 [2024-11-26 20:55:45.731156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.731182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 00:25:42.300 [2024-11-26 20:55:45.731290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.731321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 00:25:42.300 [2024-11-26 20:55:45.731413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.731438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 00:25:42.300 [2024-11-26 20:55:45.731548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.731574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 00:25:42.300 [2024-11-26 20:55:45.731670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.731695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 00:25:42.300 [2024-11-26 20:55:45.731785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.731811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 
00:25:42.300 [2024-11-26 20:55:45.731910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.731935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 00:25:42.300 [2024-11-26 20:55:45.732149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.732188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 00:25:42.300 [2024-11-26 20:55:45.732278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.732313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 00:25:42.300 [2024-11-26 20:55:45.732403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.732430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 00:25:42.300 [2024-11-26 20:55:45.732520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.732546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 00:25:42.300 [2024-11-26 20:55:45.732663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.732689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 00:25:42.300 [2024-11-26 20:55:45.732771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.732797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 00:25:42.300 [2024-11-26 20:55:45.732882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.732909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 00:25:42.300 [2024-11-26 20:55:45.732985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.733010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 00:25:42.300 [2024-11-26 20:55:45.733110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.733150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 
00:25:42.300 [2024-11-26 20:55:45.733242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.733270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 00:25:42.300 [2024-11-26 20:55:45.733364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.733393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 00:25:42.300 [2024-11-26 20:55:45.733475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.733502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 00:25:42.300 [2024-11-26 20:55:45.733581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.733606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 00:25:42.300 [2024-11-26 20:55:45.733691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.733717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 00:25:42.300 [2024-11-26 20:55:45.733833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.733858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 00:25:42.300 [2024-11-26 20:55:45.733948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.733975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 00:25:42.300 [2024-11-26 20:55:45.734059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.734083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 00:25:42.300 [2024-11-26 20:55:45.734208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.734236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 00:25:42.300 [2024-11-26 20:55:45.734334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.734363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 
00:25:42.300 [2024-11-26 20:55:45.734461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.734501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 00:25:42.300 [2024-11-26 20:55:45.734625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.734654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 00:25:42.300 [2024-11-26 20:55:45.734768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.734794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 00:25:42.300 [2024-11-26 20:55:45.734888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.734913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 00:25:42.300 [2024-11-26 20:55:45.734999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.735025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 00:25:42.300 [2024-11-26 20:55:45.735114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.735140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 00:25:42.300 [2024-11-26 20:55:45.735229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.735257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 00:25:42.300 [2024-11-26 20:55:45.735390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.300 [2024-11-26 20:55:45.735421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.300 qpair failed and we were unable to recover it. 00:25:42.300 [2024-11-26 20:55:45.735501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.735528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 00:25:42.301 [2024-11-26 20:55:45.735648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.735675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 
00:25:42.301 [2024-11-26 20:55:45.735770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.735799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 00:25:42.301 [2024-11-26 20:55:45.735886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.735914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 00:25:42.301 [2024-11-26 20:55:45.736013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.736039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 00:25:42.301 [2024-11-26 20:55:45.736117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.736142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 00:25:42.301 [2024-11-26 20:55:45.736238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.736278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 00:25:42.301 [2024-11-26 20:55:45.736380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.736408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 00:25:42.301 [2024-11-26 20:55:45.736494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.736521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 00:25:42.301 [2024-11-26 20:55:45.736611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.736639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 00:25:42.301 [2024-11-26 20:55:45.736726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.736753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 00:25:42.301 [2024-11-26 20:55:45.736850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.736878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 
00:25:42.301 [2024-11-26 20:55:45.736989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.737016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 00:25:42.301 [2024-11-26 20:55:45.737104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.737130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 00:25:42.301 [2024-11-26 20:55:45.737221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.737260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 00:25:42.301 [2024-11-26 20:55:45.737357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.737385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 00:25:42.301 [2024-11-26 20:55:45.737463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.737491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 00:25:42.301 [2024-11-26 20:55:45.737629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.737656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 00:25:42.301 [2024-11-26 20:55:45.737743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.737769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 00:25:42.301 [2024-11-26 20:55:45.737854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.737882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 00:25:42.301 [2024-11-26 20:55:45.737965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.737992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 00:25:42.301 [2024-11-26 20:55:45.738116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.738156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 
00:25:42.301 [2024-11-26 20:55:45.738270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.738298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 00:25:42.301 [2024-11-26 20:55:45.738393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.738419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 00:25:42.301 [2024-11-26 20:55:45.738511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.738543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 00:25:42.301 [2024-11-26 20:55:45.738635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.738661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 00:25:42.301 [2024-11-26 20:55:45.738749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.738778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 00:25:42.301 [2024-11-26 20:55:45.738875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.738903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 00:25:42.301 [2024-11-26 20:55:45.739013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.739040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 00:25:42.301 [2024-11-26 20:55:45.739122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.739149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 00:25:42.301 [2024-11-26 20:55:45.739240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.739267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 00:25:42.301 [2024-11-26 20:55:45.739369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.739410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 
00:25:42.301 [2024-11-26 20:55:45.739506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.739533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 00:25:42.301 [2024-11-26 20:55:45.739621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.739647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 00:25:42.301 [2024-11-26 20:55:45.739792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.739819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 00:25:42.301 [2024-11-26 20:55:45.739896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.739922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 00:25:42.301 [2024-11-26 20:55:45.740003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.740028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 00:25:42.301 [2024-11-26 20:55:45.740120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.301 [2024-11-26 20:55:45.740146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.301 qpair failed and we were unable to recover it. 00:25:42.301 [2024-11-26 20:55:45.740240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.740266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 00:25:42.302 [2024-11-26 20:55:45.740352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.740378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 00:25:42.302 [2024-11-26 20:55:45.740488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.740514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 00:25:42.302 [2024-11-26 20:55:45.740591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.740617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 
00:25:42.302 [2024-11-26 20:55:45.740697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.740723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 00:25:42.302 [2024-11-26 20:55:45.740862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.740888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 00:25:42.302 [2024-11-26 20:55:45.740964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.740989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 00:25:42.302 [2024-11-26 20:55:45.741090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.741130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 00:25:42.302 [2024-11-26 20:55:45.741222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.741251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 00:25:42.302 [2024-11-26 20:55:45.741369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.741397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 00:25:42.302 [2024-11-26 20:55:45.741484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.741511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 00:25:42.302 [2024-11-26 20:55:45.741593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.741620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 00:25:42.302 [2024-11-26 20:55:45.741706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.741732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 00:25:42.302 [2024-11-26 20:55:45.741823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.741852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 
00:25:42.302 [2024-11-26 20:55:45.741973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.741999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 00:25:42.302 [2024-11-26 20:55:45.742098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.742138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 00:25:42.302 [2024-11-26 20:55:45.742241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.742269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 00:25:42.302 [2024-11-26 20:55:45.742357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.742384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 00:25:42.302 [2024-11-26 20:55:45.742505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.742533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 00:25:42.302 [2024-11-26 20:55:45.742618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.742646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 00:25:42.302 [2024-11-26 20:55:45.742756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.742782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 00:25:42.302 [2024-11-26 20:55:45.742898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.742924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 00:25:42.302 [2024-11-26 20:55:45.743010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.743039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 00:25:42.302 [2024-11-26 20:55:45.743135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.743175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 
00:25:42.302 [2024-11-26 20:55:45.743293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.743330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 00:25:42.302 [2024-11-26 20:55:45.743410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.743437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 00:25:42.302 [2024-11-26 20:55:45.743518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.743549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 00:25:42.302 [2024-11-26 20:55:45.743643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.743669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 00:25:42.302 [2024-11-26 20:55:45.743749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.743776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 00:25:42.302 [2024-11-26 20:55:45.743861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.743889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 00:25:42.302 [2024-11-26 20:55:45.743970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.743997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 00:25:42.302 [2024-11-26 20:55:45.744109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.744135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 00:25:42.302 [2024-11-26 20:55:45.744220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.744247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 00:25:42.302 [2024-11-26 20:55:45.744386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.744425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 
00:25:42.302 [2024-11-26 20:55:45.744510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.744537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 00:25:42.302 [2024-11-26 20:55:45.744619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.744645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 00:25:42.302 [2024-11-26 20:55:45.744734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.744759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 00:25:42.302 [2024-11-26 20:55:45.744869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.302 [2024-11-26 20:55:45.744894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.302 qpair failed and we were unable to recover it. 00:25:42.302 [2024-11-26 20:55:45.744975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.303 [2024-11-26 20:55:45.745000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.303 qpair failed and we were unable to recover it. 00:25:42.303 [2024-11-26 20:55:45.745088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.303 [2024-11-26 20:55:45.745113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.303 qpair failed and we were unable to recover it. 00:25:42.303 [2024-11-26 20:55:45.745217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.303 [2024-11-26 20:55:45.745246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.303 qpair failed and we were unable to recover it. 00:25:42.303 [2024-11-26 20:55:45.745335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.303 [2024-11-26 20:55:45.745363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.303 qpair failed and we were unable to recover it. 00:25:42.303 [2024-11-26 20:55:45.745448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.303 [2024-11-26 20:55:45.745475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.303 qpair failed and we were unable to recover it. 00:25:42.303 [2024-11-26 20:55:45.745561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.303 [2024-11-26 20:55:45.745588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.303 qpair failed and we were unable to recover it. 
00:25:42.303 [2024-11-26 20:55:45.745696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.303 [2024-11-26 20:55:45.745722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.303 qpair failed and we were unable to recover it. 00:25:42.303 [2024-11-26 20:55:45.745846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.303 [2024-11-26 20:55:45.745875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.303 qpair failed and we were unable to recover it. 00:25:42.303 [2024-11-26 20:55:45.745991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.303 [2024-11-26 20:55:45.746017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.303 qpair failed and we were unable to recover it. 00:25:42.303 [2024-11-26 20:55:45.746106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.303 [2024-11-26 20:55:45.746134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.303 qpair failed and we were unable to recover it. 00:25:42.303 [2024-11-26 20:55:45.746221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.303 [2024-11-26 20:55:45.746246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.303 qpair failed and we were unable to recover it. 00:25:42.303 [2024-11-26 20:55:45.746333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.303 [2024-11-26 20:55:45.746360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.303 qpair failed and we were unable to recover it. 00:25:42.303 [2024-11-26 20:55:45.746442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.303 [2024-11-26 20:55:45.746467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.303 qpair failed and we were unable to recover it. 00:25:42.303 [2024-11-26 20:55:45.746577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.303 [2024-11-26 20:55:45.746603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.303 qpair failed and we were unable to recover it. 00:25:42.303 [2024-11-26 20:55:45.746712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.303 [2024-11-26 20:55:45.746738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.303 qpair failed and we were unable to recover it. 00:25:42.303 [2024-11-26 20:55:45.746827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.303 [2024-11-26 20:55:45.746859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.303 qpair failed and we were unable to recover it. 
00:25:42.303 [2024-11-26 20:55:45.746939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.303 [2024-11-26 20:55:45.746965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.303 qpair failed and we were unable to recover it. 00:25:42.303 [2024-11-26 20:55:45.747064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.303 [2024-11-26 20:55:45.747106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.303 qpair failed and we were unable to recover it. 00:25:42.303 [2024-11-26 20:55:45.747202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.303 [2024-11-26 20:55:45.747243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.303 qpair failed and we were unable to recover it. 00:25:42.303 [2024-11-26 20:55:45.747341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.303 [2024-11-26 20:55:45.747370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.303 qpair failed and we were unable to recover it. 00:25:42.303 [2024-11-26 20:55:45.747455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.303 [2024-11-26 20:55:45.747482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.303 qpair failed and we were unable to recover it. 00:25:42.303 [2024-11-26 20:55:45.747564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.303 [2024-11-26 20:55:45.747591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.303 qpair failed and we were unable to recover it. 00:25:42.303 [2024-11-26 20:55:45.747706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.303 [2024-11-26 20:55:45.747732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.303 qpair failed and we were unable to recover it. 00:25:42.303 [2024-11-26 20:55:45.747839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.303 [2024-11-26 20:55:45.747865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.303 qpair failed and we were unable to recover it. 00:25:42.303 [2024-11-26 20:55:45.747951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.303 [2024-11-26 20:55:45.747977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.303 qpair failed and we were unable to recover it. 00:25:42.303 [2024-11-26 20:55:45.748084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.303 [2024-11-26 20:55:45.748127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.303 qpair failed and we were unable to recover it. 
00:25:42.303 [2024-11-26 20:55:45.748252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.303 [2024-11-26 20:55:45.748279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.303 qpair failed and we were unable to recover it. 00:25:42.303 [2024-11-26 20:55:45.748364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.303 [2024-11-26 20:55:45.748390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.303 qpair failed and we were unable to recover it. 00:25:42.303 [2024-11-26 20:55:45.748473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.303 [2024-11-26 20:55:45.748500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.303 qpair failed and we were unable to recover it. 00:25:42.303 [2024-11-26 20:55:45.748590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.303 [2024-11-26 20:55:45.748616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.303 qpair failed and we were unable to recover it. 00:25:42.303 [2024-11-26 20:55:45.748725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.303 [2024-11-26 20:55:45.748751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.303 qpair failed and we were unable to recover it. 00:25:42.303 [2024-11-26 20:55:45.748864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.303 [2024-11-26 20:55:45.748891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.303 qpair failed and we were unable to recover it. 00:25:42.303 [2024-11-26 20:55:45.748977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.303 [2024-11-26 20:55:45.749004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 00:25:42.304 [2024-11-26 20:55:45.749099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.749140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 00:25:42.304 [2024-11-26 20:55:45.749258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.749285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 00:25:42.304 [2024-11-26 20:55:45.749380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.749410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 
00:25:42.304 [2024-11-26 20:55:45.749529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.749554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 00:25:42.304 [2024-11-26 20:55:45.749644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.749670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 00:25:42.304 [2024-11-26 20:55:45.749765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.749789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 00:25:42.304 [2024-11-26 20:55:45.749899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.749926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 00:25:42.304 [2024-11-26 20:55:45.750043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.750069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 00:25:42.304 [2024-11-26 20:55:45.750159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.750186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 00:25:42.304 [2024-11-26 20:55:45.750271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.750298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 00:25:42.304 [2024-11-26 20:55:45.750417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.750458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 00:25:42.304 [2024-11-26 20:55:45.750547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.750576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 00:25:42.304 [2024-11-26 20:55:45.750658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.750685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 
00:25:42.304 [2024-11-26 20:55:45.750771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.750799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 00:25:42.304 [2024-11-26 20:55:45.750915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.750942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 00:25:42.304 [2024-11-26 20:55:45.751054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.751081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 00:25:42.304 [2024-11-26 20:55:45.751173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.751200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 00:25:42.304 [2024-11-26 20:55:45.751327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.751354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 00:25:42.304 [2024-11-26 20:55:45.751474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.751499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 00:25:42.304 [2024-11-26 20:55:45.751581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.751606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 00:25:42.304 [2024-11-26 20:55:45.751720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.751746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 00:25:42.304 [2024-11-26 20:55:45.751864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.751889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 00:25:42.304 [2024-11-26 20:55:45.751999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.752032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 
00:25:42.304 [2024-11-26 20:55:45.752149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.752178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 00:25:42.304 [2024-11-26 20:55:45.752286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.752334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 00:25:42.304 [2024-11-26 20:55:45.752426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.752453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 00:25:42.304 [2024-11-26 20:55:45.752567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.752592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 00:25:42.304 [2024-11-26 20:55:45.752668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.752695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 00:25:42.304 [2024-11-26 20:55:45.752774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.752799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 00:25:42.304 [2024-11-26 20:55:45.752913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.752938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 00:25:42.304 [2024-11-26 20:55:45.753027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.753055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 00:25:42.304 [2024-11-26 20:55:45.753152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.753178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 00:25:42.304 [2024-11-26 20:55:45.753289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.753326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 
00:25:42.304 [2024-11-26 20:55:45.753440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.753467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 00:25:42.304 [2024-11-26 20:55:45.753555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.753582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 00:25:42.304 [2024-11-26 20:55:45.753689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.753715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 00:25:42.304 [2024-11-26 20:55:45.753806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.753832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.304 qpair failed and we were unable to recover it. 00:25:42.304 [2024-11-26 20:55:45.753915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.304 [2024-11-26 20:55:45.753940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 00:25:42.305 [2024-11-26 20:55:45.754022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.754049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 00:25:42.305 [2024-11-26 20:55:45.754140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.754167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 00:25:42.305 [2024-11-26 20:55:45.754316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.754356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 00:25:42.305 [2024-11-26 20:55:45.754450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.754477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 00:25:42.305 [2024-11-26 20:55:45.754572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.754598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 
00:25:42.305 [2024-11-26 20:55:45.754685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.754711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 00:25:42.305 [2024-11-26 20:55:45.754808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.754835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 00:25:42.305 [2024-11-26 20:55:45.754934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.754962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 00:25:42.305 [2024-11-26 20:55:45.755058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.755101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 00:25:42.305 [2024-11-26 20:55:45.755300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.755334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 00:25:42.305 [2024-11-26 20:55:45.755425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.755451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 00:25:42.305 [2024-11-26 20:55:45.755538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.755570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 00:25:42.305 [2024-11-26 20:55:45.755656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.755682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 00:25:42.305 [2024-11-26 20:55:45.755763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.755791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 00:25:42.305 [2024-11-26 20:55:45.755871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.755896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 
00:25:42.305 [2024-11-26 20:55:45.755987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.756014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 00:25:42.305 [2024-11-26 20:55:45.756097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.756123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 00:25:42.305 [2024-11-26 20:55:45.756202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.756227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 00:25:42.305 [2024-11-26 20:55:45.756312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.756340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 00:25:42.305 [2024-11-26 20:55:45.756428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.756454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 00:25:42.305 [2024-11-26 20:55:45.756539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.756564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 00:25:42.305 [2024-11-26 20:55:45.756653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.756679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 00:25:42.305 [2024-11-26 20:55:45.756756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.756782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 00:25:42.305 [2024-11-26 20:55:45.756868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.756893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 00:25:42.305 [2024-11-26 20:55:45.756989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.757016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 
00:25:42.305 [2024-11-26 20:55:45.757108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.757136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 00:25:42.305 [2024-11-26 20:55:45.757221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.757251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 00:25:42.305 [2024-11-26 20:55:45.757344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.757372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 00:25:42.305 [2024-11-26 20:55:45.757453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.757480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 00:25:42.305 [2024-11-26 20:55:45.757563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.757590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 00:25:42.305 [2024-11-26 20:55:45.757713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.757741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 00:25:42.305 [2024-11-26 20:55:45.757827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.757854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 00:25:42.305 [2024-11-26 20:55:45.757971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.757997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 00:25:42.305 [2024-11-26 20:55:45.758115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.758141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 00:25:42.305 [2024-11-26 20:55:45.758225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.758250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 
00:25:42.305 [2024-11-26 20:55:45.758334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.758361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 00:25:42.305 [2024-11-26 20:55:45.758473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.305 [2024-11-26 20:55:45.758499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.305 qpair failed and we were unable to recover it. 00:25:42.305 [2024-11-26 20:55:45.758584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.758610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 00:25:42.306 [2024-11-26 20:55:45.758705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.758734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 00:25:42.306 [2024-11-26 20:55:45.758852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.758881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 00:25:42.306 [2024-11-26 20:55:45.758970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.758997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 00:25:42.306 [2024-11-26 20:55:45.759093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.759120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 00:25:42.306 [2024-11-26 20:55:45.759206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.759233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 00:25:42.306 [2024-11-26 20:55:45.759348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.759376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 00:25:42.306 [2024-11-26 20:55:45.759457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.759484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 
00:25:42.306 [2024-11-26 20:55:45.759569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.759596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 00:25:42.306 [2024-11-26 20:55:45.759680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.759707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 00:25:42.306 [2024-11-26 20:55:45.759794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.759822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 00:25:42.306 [2024-11-26 20:55:45.759927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.759955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 00:25:42.306 [2024-11-26 20:55:45.760056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.760094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 00:25:42.306 [2024-11-26 20:55:45.760196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.760225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 00:25:42.306 [2024-11-26 20:55:45.760309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.760341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 00:25:42.306 [2024-11-26 20:55:45.760428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.760456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 00:25:42.306 [2024-11-26 20:55:45.760534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.760560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 00:25:42.306 [2024-11-26 20:55:45.760654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.760682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 
00:25:42.306 [2024-11-26 20:55:45.760762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.760789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 00:25:42.306 [2024-11-26 20:55:45.760882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.760909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 00:25:42.306 [2024-11-26 20:55:45.761021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.761048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 00:25:42.306 [2024-11-26 20:55:45.761135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.761164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 00:25:42.306 [2024-11-26 20:55:45.761257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.761283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 00:25:42.306 [2024-11-26 20:55:45.761382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.761408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 00:25:42.306 [2024-11-26 20:55:45.761504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.761530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 00:25:42.306 [2024-11-26 20:55:45.761609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.761635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 00:25:42.306 [2024-11-26 20:55:45.761717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.761742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 00:25:42.306 [2024-11-26 20:55:45.761822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.761848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 
00:25:42.306 [2024-11-26 20:55:45.761933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.761960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 00:25:42.306 [2024-11-26 20:55:45.762076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.762103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 00:25:42.306 [2024-11-26 20:55:45.762190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.762219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 00:25:42.306 [2024-11-26 20:55:45.762315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.762343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 00:25:42.306 [2024-11-26 20:55:45.762425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.762452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 00:25:42.306 [2024-11-26 20:55:45.762540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.762567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 00:25:42.306 [2024-11-26 20:55:45.762649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.762676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 00:25:42.306 [2024-11-26 20:55:45.762787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.762814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 00:25:42.306 [2024-11-26 20:55:45.762898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.762924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 00:25:42.306 [2024-11-26 20:55:45.763004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.306 [2024-11-26 20:55:45.763031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.306 qpair failed and we were unable to recover it. 
00:25:42.306 [2024-11-26 20:55:45.763118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.763146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 00:25:42.307 [2024-11-26 20:55:45.763236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.763263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 00:25:42.307 [2024-11-26 20:55:45.763373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.763413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 00:25:42.307 [2024-11-26 20:55:45.763506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.763534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 00:25:42.307 [2024-11-26 20:55:45.763629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.763657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 00:25:42.307 [2024-11-26 20:55:45.763769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.763796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 00:25:42.307 [2024-11-26 20:55:45.763879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.763906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 00:25:42.307 [2024-11-26 20:55:45.764018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.764045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 00:25:42.307 [2024-11-26 20:55:45.764158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.764185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 00:25:42.307 [2024-11-26 20:55:45.764270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.764298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 
00:25:42.307 [2024-11-26 20:55:45.764392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.764420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 00:25:42.307 [2024-11-26 20:55:45.764497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.764524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 00:25:42.307 [2024-11-26 20:55:45.764611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.764636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 00:25:42.307 [2024-11-26 20:55:45.764729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.764756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 00:25:42.307 [2024-11-26 20:55:45.764833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.764859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 00:25:42.307 [2024-11-26 20:55:45.764947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.764974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 00:25:42.307 [2024-11-26 20:55:45.765057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.765088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 00:25:42.307 [2024-11-26 20:55:45.765206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.765235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 00:25:42.307 [2024-11-26 20:55:45.765338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.765378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 00:25:42.307 [2024-11-26 20:55:45.765483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.765522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 
00:25:42.307 [2024-11-26 20:55:45.765605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.765633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 00:25:42.307 [2024-11-26 20:55:45.765760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.765786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 00:25:42.307 [2024-11-26 20:55:45.765873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.765900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 00:25:42.307 [2024-11-26 20:55:45.766010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.766038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 00:25:42.307 [2024-11-26 20:55:45.766138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.766178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 00:25:42.307 [2024-11-26 20:55:45.766281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.766327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 00:25:42.307 [2024-11-26 20:55:45.766421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.766448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 00:25:42.307 [2024-11-26 20:55:45.766529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.766556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 00:25:42.307 [2024-11-26 20:55:45.766639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.766664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 00:25:42.307 [2024-11-26 20:55:45.766748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.766774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 
00:25:42.307 [2024-11-26 20:55:45.766891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.766916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 00:25:42.307 [2024-11-26 20:55:45.767011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.767052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 00:25:42.307 [2024-11-26 20:55:45.767148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.767176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 00:25:42.307 [2024-11-26 20:55:45.767300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.767334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 00:25:42.307 [2024-11-26 20:55:45.767417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.767445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 00:25:42.307 [2024-11-26 20:55:45.767531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.767558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 00:25:42.307 [2024-11-26 20:55:45.767668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.767695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 00:25:42.307 [2024-11-26 20:55:45.767773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.307 [2024-11-26 20:55:45.767800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.307 qpair failed and we were unable to recover it. 00:25:42.307 [2024-11-26 20:55:45.767915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.308 [2024-11-26 20:55:45.767942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.308 qpair failed and we were unable to recover it. 00:25:42.308 [2024-11-26 20:55:45.768035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.308 [2024-11-26 20:55:45.768062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.308 qpair failed and we were unable to recover it. 
00:25:42.308 [2024-11-26 20:55:45.768147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.308 [2024-11-26 20:55:45.768175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.308 qpair failed and we were unable to recover it. 00:25:42.308 [2024-11-26 20:55:45.768268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.308 [2024-11-26 20:55:45.768298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.308 qpair failed and we were unable to recover it. 00:25:42.308 [2024-11-26 20:55:45.768395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.308 [2024-11-26 20:55:45.768421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.308 qpair failed and we were unable to recover it. 00:25:42.308 [2024-11-26 20:55:45.768508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.308 [2024-11-26 20:55:45.768540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.308 qpair failed and we were unable to recover it. 00:25:42.308 [2024-11-26 20:55:45.768627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.308 [2024-11-26 20:55:45.768653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.308 qpair failed and we were unable to recover it. 00:25:42.308 [2024-11-26 20:55:45.768733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.308 [2024-11-26 20:55:45.768758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.308 qpair failed and we were unable to recover it. 00:25:42.308 [2024-11-26 20:55:45.768839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.308 [2024-11-26 20:55:45.768864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.308 qpair failed and we were unable to recover it. 00:25:42.308 [2024-11-26 20:55:45.768956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.308 [2024-11-26 20:55:45.768983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.308 qpair failed and we were unable to recover it. 00:25:42.308 [2024-11-26 20:55:45.769067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.308 [2024-11-26 20:55:45.769096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.308 qpair failed and we were unable to recover it. 00:25:42.308 [2024-11-26 20:55:45.769192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.308 [2024-11-26 20:55:45.769218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.308 qpair failed and we were unable to recover it. 
00:25:42.308 [2024-11-26 20:55:45.769312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.308 [2024-11-26 20:55:45.769340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.308 qpair failed and we were unable to recover it. 00:25:42.308 [2024-11-26 20:55:45.769434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.308 [2024-11-26 20:55:45.769461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.308 qpair failed and we were unable to recover it. 00:25:42.308 [2024-11-26 20:55:45.769540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.308 [2024-11-26 20:55:45.769566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.308 qpair failed and we were unable to recover it. 00:25:42.308 [2024-11-26 20:55:45.769678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.308 [2024-11-26 20:55:45.769703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.308 qpair failed and we were unable to recover it. 00:25:42.308 [2024-11-26 20:55:45.769781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.308 [2024-11-26 20:55:45.769809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.308 qpair failed and we were unable to recover it. 00:25:42.308 [2024-11-26 20:55:45.769898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.308 [2024-11-26 20:55:45.769927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.308 qpair failed and we were unable to recover it. 00:25:42.308 [2024-11-26 20:55:45.770011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.308 [2024-11-26 20:55:45.770039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.308 qpair failed and we were unable to recover it. 00:25:42.308 [2024-11-26 20:55:45.770138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.308 [2024-11-26 20:55:45.770165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.308 qpair failed and we were unable to recover it. 00:25:42.308 [2024-11-26 20:55:45.770255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.308 [2024-11-26 20:55:45.770282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.308 qpair failed and we were unable to recover it. 00:25:42.308 [2024-11-26 20:55:45.770374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.308 [2024-11-26 20:55:45.770400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.308 qpair failed and we were unable to recover it. 
00:25:42.308 [2024-11-26 20:55:45.770511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.308 [2024-11-26 20:55:45.770537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.308 qpair failed and we were unable to recover it. 00:25:42.308 [2024-11-26 20:55:45.770614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.308 [2024-11-26 20:55:45.770640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.308 qpair failed and we were unable to recover it. 00:25:42.308 [2024-11-26 20:55:45.770724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.308 [2024-11-26 20:55:45.770751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.308 qpair failed and we were unable to recover it. 00:25:42.308 [2024-11-26 20:55:45.770838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.308 [2024-11-26 20:55:45.770865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.308 qpair failed and we were unable to recover it. 00:25:42.308 [2024-11-26 20:55:45.770975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.308 [2024-11-26 20:55:45.771001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.308 qpair failed and we were unable to recover it. 00:25:42.308 [2024-11-26 20:55:45.771081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.308 [2024-11-26 20:55:45.771106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.308 qpair failed and we were unable to recover it. 00:25:42.308 [2024-11-26 20:55:45.771214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.308 [2024-11-26 20:55:45.771254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.308 qpair failed and we were unable to recover it. 00:25:42.308 [2024-11-26 20:55:45.771376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.308 [2024-11-26 20:55:45.771405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.308 qpair failed and we were unable to recover it. 00:25:42.308 [2024-11-26 20:55:45.771492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.308 [2024-11-26 20:55:45.771518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.308 qpair failed and we were unable to recover it. 00:25:42.308 [2024-11-26 20:55:45.771631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.308 [2024-11-26 20:55:45.771657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.308 qpair failed and we were unable to recover it. 
00:25:42.308 [2024-11-26 20:55:45.771777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.308 [2024-11-26 20:55:45.771803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.308 qpair failed and we were unable to recover it. 00:25:42.308 [2024-11-26 20:55:45.771894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.308 [2024-11-26 20:55:45.771920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.308 qpair failed and we were unable to recover it. 00:25:42.309 [2024-11-26 20:55:45.771998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.772023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 00:25:42.309 [2024-11-26 20:55:45.772138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.772164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 00:25:42.309 [2024-11-26 20:55:45.772263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.772313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 00:25:42.309 [2024-11-26 20:55:45.772406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.772434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 00:25:42.309 [2024-11-26 20:55:45.772527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.772553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 00:25:42.309 [2024-11-26 20:55:45.772635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.772660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 00:25:42.309 [2024-11-26 20:55:45.772748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.772776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 00:25:42.309 [2024-11-26 20:55:45.772861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.772888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 
00:25:42.309 [2024-11-26 20:55:45.772997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.773026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 00:25:42.309 [2024-11-26 20:55:45.773115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.773141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 00:25:42.309 [2024-11-26 20:55:45.773240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.773280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 00:25:42.309 [2024-11-26 20:55:45.773405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.773437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 00:25:42.309 [2024-11-26 20:55:45.773524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.773549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 00:25:42.309 [2024-11-26 20:55:45.773625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.773650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 00:25:42.309 [2024-11-26 20:55:45.773727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.773752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 00:25:42.309 [2024-11-26 20:55:45.773843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.773867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 00:25:42.309 [2024-11-26 20:55:45.773948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.773977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 00:25:42.309 [2024-11-26 20:55:45.774064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.774090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 
00:25:42.309 [2024-11-26 20:55:45.774175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.774201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 00:25:42.309 [2024-11-26 20:55:45.774286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.774319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 00:25:42.309 [2024-11-26 20:55:45.774407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.774433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 00:25:42.309 [2024-11-26 20:55:45.774516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.774542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 00:25:42.309 [2024-11-26 20:55:45.774624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.774650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 00:25:42.309 [2024-11-26 20:55:45.774758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.774783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 00:25:42.309 [2024-11-26 20:55:45.774861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.774887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 00:25:42.309 [2024-11-26 20:55:45.775063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.775088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 00:25:42.309 [2024-11-26 20:55:45.775172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.775198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 00:25:42.309 [2024-11-26 20:55:45.775283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.775317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 
00:25:42.309 [2024-11-26 20:55:45.775402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.775428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 00:25:42.309 [2024-11-26 20:55:45.775517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.775543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 00:25:42.309 [2024-11-26 20:55:45.775631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.775657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 00:25:42.309 [2024-11-26 20:55:45.775737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.775763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 00:25:42.309 [2024-11-26 20:55:45.775847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.775873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 00:25:42.309 [2024-11-26 20:55:45.775954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.775982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 00:25:42.309 [2024-11-26 20:55:45.776061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.776086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 00:25:42.309 [2024-11-26 20:55:45.776196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.776222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 00:25:42.309 [2024-11-26 20:55:45.776319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.309 [2024-11-26 20:55:45.776345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.309 qpair failed and we were unable to recover it. 
00:25:42.309 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:42.309 [2024-11-26 20:55:45.776426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 [2024-11-26 20:55:45.776453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.310 qpair failed and we were unable to recover it. 00:25:42.310 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:25:42.310 [2024-11-26 20:55:45.776575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 [2024-11-26 20:55:45.776601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.310 qpair failed and we were unable to recover it. 00:25:42.310 [2024-11-26 20:55:45.776685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:42.310 [2024-11-26 20:55:45.776712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.310 qpair failed and we were unable to recover it. 00:25:42.310 [2024-11-26 20:55:45.776798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 [2024-11-26 20:55:45.776823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.310 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:42.310 qpair failed and we were unable to recover it. 00:25:42.310 [2024-11-26 20:55:45.776903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 [2024-11-26 20:55:45.776928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.310 qpair failed and we were unable to recover it. 00:25:42.310 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:42.310 [2024-11-26 20:55:45.777006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 [2024-11-26 20:55:45.777031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.310 qpair failed and we were unable to recover it. 00:25:42.310 [2024-11-26 20:55:45.777137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 [2024-11-26 20:55:45.777162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.310 qpair failed and we were unable to recover it. 00:25:42.310 [2024-11-26 20:55:45.777267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 [2024-11-26 20:55:45.777295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.310 qpair failed and we were unable to recover it. 
00:25:42.310 [2024-11-26 20:55:45.777382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 [2024-11-26 20:55:45.777407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.310 qpair failed and we were unable to recover it. 00:25:42.310 [2024-11-26 20:55:45.777490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 [2024-11-26 20:55:45.777516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.310 qpair failed and we were unable to recover it. 00:25:42.310 [2024-11-26 20:55:45.777606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 [2024-11-26 20:55:45.777632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.310 qpair failed and we were unable to recover it. 00:25:42.310 [2024-11-26 20:55:45.777742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 [2024-11-26 20:55:45.777766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.310 qpair failed and we were unable to recover it. 00:25:42.310 [2024-11-26 20:55:45.777869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 [2024-11-26 20:55:45.777898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.310 qpair failed and we were unable to recover it. 00:25:42.310 [2024-11-26 20:55:45.777986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 [2024-11-26 20:55:45.778012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.310 qpair failed and we were unable to recover it. 00:25:42.310 [2024-11-26 20:55:45.778090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 [2024-11-26 20:55:45.778117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.310 qpair failed and we were unable to recover it. 00:25:42.310 [2024-11-26 20:55:45.778203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 [2024-11-26 20:55:45.778229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.310 qpair failed and we were unable to recover it. 00:25:42.310 [2024-11-26 20:55:45.778319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 [2024-11-26 20:55:45.778345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.310 qpair failed and we were unable to recover it. 00:25:42.310 [2024-11-26 20:55:45.778441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 [2024-11-26 20:55:45.778467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.310 qpair failed and we were unable to recover it. 
00:25:42.310 [2024-11-26 20:55:45.778547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 [2024-11-26 20:55:45.778573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.310 qpair failed and we were unable to recover it. 00:25:42.310 [2024-11-26 20:55:45.778657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 [2024-11-26 20:55:45.778682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.310 qpair failed and we were unable to recover it. 00:25:42.310 [2024-11-26 20:55:45.778763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 [2024-11-26 20:55:45.778789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.310 qpair failed and we were unable to recover it. 00:25:42.310 [2024-11-26 20:55:45.778872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 [2024-11-26 20:55:45.778900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.310 qpair failed and we were unable to recover it. 00:25:42.310 [2024-11-26 20:55:45.779000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 [2024-11-26 20:55:45.779043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.310 qpair failed and we were unable to recover it. 00:25:42.310 [2024-11-26 20:55:45.779144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 [2024-11-26 20:55:45.779174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.310 qpair failed and we were unable to recover it. 00:25:42.310 [2024-11-26 20:55:45.779264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 [2024-11-26 20:55:45.779291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.310 qpair failed and we were unable to recover it. 00:25:42.310 [2024-11-26 20:55:45.779392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 [2024-11-26 20:55:45.779420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.310 qpair failed and we were unable to recover it. 00:25:42.310 [2024-11-26 20:55:45.779519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 [2024-11-26 20:55:45.779554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.310 qpair failed and we were unable to recover it. 00:25:42.310 [2024-11-26 20:55:45.779672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 [2024-11-26 20:55:45.779700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.310 qpair failed and we were unable to recover it. 
00:25:42.310 [2024-11-26 20:55:45.779789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 [2024-11-26 20:55:45.779817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.310 qpair failed and we were unable to recover it. 00:25:42.310 [2024-11-26 20:55:45.779911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 [2024-11-26 20:55:45.779938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.310 qpair failed and we were unable to recover it. 00:25:42.310 [2024-11-26 20:55:45.780031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 [2024-11-26 20:55:45.780057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.310 qpair failed and we were unable to recover it. 00:25:42.310 [2024-11-26 20:55:45.780165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 [2024-11-26 20:55:45.780191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.310 qpair failed and we were unable to recover it. 00:25:42.310 [2024-11-26 20:55:45.780321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 [2024-11-26 20:55:45.780347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.310 qpair failed and we were unable to recover it. 00:25:42.310 [2024-11-26 20:55:45.780461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 [2024-11-26 20:55:45.780487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.310 qpair failed and we were unable to recover it. 00:25:42.310 [2024-11-26 20:55:45.780564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 [2024-11-26 20:55:45.780601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.310 qpair failed and we were unable to recover it. 00:25:42.310 [2024-11-26 20:55:45.780682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 [2024-11-26 20:55:45.780709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.310 qpair failed and we were unable to recover it. 00:25:42.310 [2024-11-26 20:55:45.780795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.310 [2024-11-26 20:55:45.780834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.310 qpair failed and we were unable to recover it. 00:25:42.310 [2024-11-26 20:55:45.780927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.780956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 
00:25:42.311 [2024-11-26 20:55:45.781045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.781084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 00:25:42.311 [2024-11-26 20:55:45.781175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.781202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 00:25:42.311 [2024-11-26 20:55:45.781288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.781323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 00:25:42.311 [2024-11-26 20:55:45.781408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.781435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 00:25:42.311 [2024-11-26 20:55:45.781518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.781545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 00:25:42.311 [2024-11-26 20:55:45.781633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.781659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 00:25:42.311 [2024-11-26 20:55:45.781743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.781770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 00:25:42.311 [2024-11-26 20:55:45.781857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.781884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 00:25:42.311 [2024-11-26 20:55:45.781970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.781998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 00:25:42.311 [2024-11-26 20:55:45.782118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.782144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 
00:25:42.311 [2024-11-26 20:55:45.782259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.782297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 00:25:42.311 [2024-11-26 20:55:45.782388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.782415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 00:25:42.311 [2024-11-26 20:55:45.782500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.782527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 00:25:42.311 [2024-11-26 20:55:45.782647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.782674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 00:25:42.311 [2024-11-26 20:55:45.782756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.782783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 00:25:42.311 [2024-11-26 20:55:45.782870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.782898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 00:25:42.311 [2024-11-26 20:55:45.782988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.783018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 00:25:42.311 [2024-11-26 20:55:45.783139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.783179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 00:25:42.311 [2024-11-26 20:55:45.783265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.783310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 00:25:42.311 [2024-11-26 20:55:45.783403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.783430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 
00:25:42.311 [2024-11-26 20:55:45.783511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.783537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 00:25:42.311 [2024-11-26 20:55:45.783637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.783664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 00:25:42.311 [2024-11-26 20:55:45.783753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.783781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 00:25:42.311 [2024-11-26 20:55:45.783867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.783895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 00:25:42.311 [2024-11-26 20:55:45.783981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.784009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 00:25:42.311 [2024-11-26 20:55:45.784119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.784146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 00:25:42.311 [2024-11-26 20:55:45.784227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.784254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 00:25:42.311 [2024-11-26 20:55:45.784381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.784412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 00:25:42.311 [2024-11-26 20:55:45.784498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.784531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 00:25:42.311 [2024-11-26 20:55:45.784653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.784679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 
00:25:42.311 [2024-11-26 20:55:45.784807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.784833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 00:25:42.311 [2024-11-26 20:55:45.784921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.784946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 00:25:42.311 [2024-11-26 20:55:45.785033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.785057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 00:25:42.311 [2024-11-26 20:55:45.785171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.785197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 00:25:42.311 [2024-11-26 20:55:45.785274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.785320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 00:25:42.311 [2024-11-26 20:55:45.785407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.785433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 00:25:42.311 [2024-11-26 20:55:45.785513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.311 [2024-11-26 20:55:45.785538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.311 qpair failed and we were unable to recover it. 00:25:42.311 [2024-11-26 20:55:45.785628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.312 [2024-11-26 20:55:45.785652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.312 qpair failed and we were unable to recover it. 00:25:42.312 [2024-11-26 20:55:45.785740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.312 [2024-11-26 20:55:45.785766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.312 qpair failed and we were unable to recover it. 00:25:42.312 [2024-11-26 20:55:45.785847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.312 [2024-11-26 20:55:45.785872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.312 qpair failed and we were unable to recover it. 
00:25:42.312 [2024-11-26 20:55:45.785989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.312 [2024-11-26 20:55:45.786014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.312 qpair failed and we were unable to recover it. 00:25:42.312 [2024-11-26 20:55:45.786098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.312 [2024-11-26 20:55:45.786123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.312 qpair failed and we were unable to recover it. 00:25:42.312 [2024-11-26 20:55:45.786209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.312 [2024-11-26 20:55:45.786248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.312 qpair failed and we were unable to recover it. 00:25:42.312 [2024-11-26 20:55:45.786344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.312 [2024-11-26 20:55:45.786371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.312 qpair failed and we were unable to recover it. 00:25:42.312 [2024-11-26 20:55:45.786481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.312 [2024-11-26 20:55:45.786508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.312 qpair failed and we were unable to recover it. 00:25:42.312 [2024-11-26 20:55:45.786589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.312 [2024-11-26 20:55:45.786624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.312 qpair failed and we were unable to recover it. 00:25:42.312 [2024-11-26 20:55:45.786733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.312 [2024-11-26 20:55:45.786760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.312 qpair failed and we were unable to recover it. 00:25:42.312 [2024-11-26 20:55:45.786848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.312 [2024-11-26 20:55:45.786874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.312 qpair failed and we were unable to recover it. 00:25:42.312 [2024-11-26 20:55:45.786970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.312 [2024-11-26 20:55:45.786998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.312 qpair failed and we were unable to recover it. 00:25:42.312 [2024-11-26 20:55:45.787091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.312 [2024-11-26 20:55:45.787130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.312 qpair failed and we were unable to recover it. 
00:25:42.312 [2024-11-26 20:55:45.787222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.312 [2024-11-26 20:55:45.787252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.312 qpair failed and we were unable to recover it. 00:25:42.312 [2024-11-26 20:55:45.787388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.312 [2024-11-26 20:55:45.787415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.312 qpair failed and we were unable to recover it. 00:25:42.312 [2024-11-26 20:55:45.787503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.312 [2024-11-26 20:55:45.787530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.312 qpair failed and we were unable to recover it. 00:25:42.312 [2024-11-26 20:55:45.787623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.312 [2024-11-26 20:55:45.787648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.312 qpair failed and we were unable to recover it. 00:25:42.312 [2024-11-26 20:55:45.787744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.312 [2024-11-26 20:55:45.787771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.312 qpair failed and we were unable to recover it. 00:25:42.312 [2024-11-26 20:55:45.787880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.312 [2024-11-26 20:55:45.787911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b4000b90 with addr=10.0.0.2, port=4420 00:25:42.312 qpair failed and we were unable to recover it. 00:25:42.312 [2024-11-26 20:55:45.788013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.312 [2024-11-26 20:55:45.788042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27bc000b90 with addr=10.0.0.2, port=4420 00:25:42.312 qpair failed and we were unable to recover it. 00:25:42.312 [2024-11-26 20:55:45.788175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.312 [2024-11-26 20:55:45.788203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.312 qpair failed and we were unable to recover it. 00:25:42.312 [2024-11-26 20:55:45.788290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.312 [2024-11-26 20:55:45.788325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.312 qpair failed and we were unable to recover it. 00:25:42.312 [2024-11-26 20:55:45.788436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.312 [2024-11-26 20:55:45.788463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.312 qpair failed and we were unable to recover it. 
00:25:42.312 [2024-11-26 20:55:45.788544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.312 [2024-11-26 20:55:45.788570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.312 qpair failed and we were unable to recover it. 00:25:42.312 [2024-11-26 20:55:45.788663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.312 [2024-11-26 20:55:45.788690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.312 qpair failed and we were unable to recover it. 00:25:42.312 [2024-11-26 20:55:45.788768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.312 [2024-11-26 20:55:45.788801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.312 qpair failed and we were unable to recover it. 00:25:42.312 [2024-11-26 20:55:45.788888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.312 [2024-11-26 20:55:45.788915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.312 qpair failed and we were unable to recover it. 00:25:42.312 [2024-11-26 20:55:45.789023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.312 [2024-11-26 20:55:45.789049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.312 qpair failed and we were unable to recover it. 00:25:42.312 [2024-11-26 20:55:45.789135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.312 [2024-11-26 20:55:45.789163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.312 qpair failed and we were unable to recover it. 00:25:42.312 [2024-11-26 20:55:45.789247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.312 [2024-11-26 20:55:45.789273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.312 qpair failed and we were unable to recover it. 00:25:42.312 [2024-11-26 20:55:45.789370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.312 [2024-11-26 20:55:45.789397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.312 qpair failed and we were unable to recover it. 00:25:42.312 [2024-11-26 20:55:45.789506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.312 [2024-11-26 20:55:45.789532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.312 qpair failed and we were unable to recover it. 00:25:42.312 [2024-11-26 20:55:45.789626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.312 [2024-11-26 20:55:45.789664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.312 qpair failed and we were unable to recover it. 
00:25:42.312 [2024-11-26 20:55:45.789775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.312 [2024-11-26 20:55:45.789802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.313 qpair failed and we were unable to recover it. 00:25:42.313 [2024-11-26 20:55:45.789890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.313 [2024-11-26 20:55:45.789916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.313 qpair failed and we were unable to recover it. 00:25:42.313 [2024-11-26 20:55:45.789993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.313 [2024-11-26 20:55:45.790020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.313 qpair failed and we were unable to recover it. 00:25:42.313 [2024-11-26 20:55:45.790098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.313 [2024-11-26 20:55:45.790124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f27b0000b90 with addr=10.0.0.2, port=4420 00:25:42.313 qpair failed and we were unable to recover it. 00:25:42.313 [2024-11-26 20:55:45.790249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.313 [2024-11-26 20:55:45.790289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.313 qpair failed and we were unable to recover it. 00:25:42.313 [2024-11-26 20:55:45.790415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.313 [2024-11-26 20:55:45.790443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.313 qpair failed and we were unable to recover it. 00:25:42.313 [2024-11-26 20:55:45.790522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.313 [2024-11-26 20:55:45.790548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.313 qpair failed and we were unable to recover it. 00:25:42.313 [2024-11-26 20:55:45.790646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.313 [2024-11-26 20:55:45.790671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.313 qpair failed and we were unable to recover it. 00:25:42.313 [2024-11-26 20:55:45.790790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.313 [2024-11-26 20:55:45.790816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f0dfa0 with addr=10.0.0.2, port=4420 00:25:42.313 qpair failed and we were unable to recover it. 00:25:42.313 A controller has encountered a failure and is being reset. 
00:25:42.313 [2024-11-26 20:55:45.790979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:42.313 [2024-11-26 20:55:45.791026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1bf30 with addr=10.0.0.2, port=4420 00:25:42.313 [2024-11-26 20:55:45.791053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1bf30 is same with the state(6) to be set 00:25:42.313 [2024-11-26 20:55:45.791080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f1bf30 (9): Bad file descriptor 00:25:42.313 [2024-11-26 20:55:45.791100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:25:42.313 [2024-11-26 20:55:45.791115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:25:42.313 [2024-11-26 20:55:45.791138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:25:42.313 Unable to reset the controller. 00:25:42.313 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:42.313 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:42.313 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.313 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:42.313 Malloc0 00:25:42.313 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.313 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:42.313 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.313 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:42.313 [2024-11-26 20:55:45.841608] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:42.313 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.313 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:42.313 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.313 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:42.313 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.313 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:42.313 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.313 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:42.313 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.313 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:42.313 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.313 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:42.313 [2024-11-26 20:55:45.869857] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:42.313 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.313 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:42.313 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.313 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:42.313 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.313 20:55:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1772254 00:25:43.250 Controller properly reset. 00:25:48.516 Initializing NVMe Controllers 00:25:48.516 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:48.516 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:48.516 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:25:48.516 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:25:48.516 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:25:48.516 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:25:48.516 Initialization complete. Launching workers. 
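The rpc_cmd calls traced above are thin wrappers around SPDK's JSON-RPC interface. For anyone reconstructing the tc2 target configuration outside the harness, the equivalent scripts/rpc.py sequence would look roughly like the sketch below; the arguments are copied from the trace, while the default /var/tmp/spdk.sock socket and an already-running nvmf_tgt are assumptions not shown here:

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB malloc bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_transport -t tcp -o             # TCP transport, flags as used by the harness
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420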
00:25:48.516 Starting thread on core 1 00:25:48.516 Starting thread on core 2 00:25:48.516 Starting thread on core 3 00:25:48.516 Starting thread on core 0 00:25:48.516 20:55:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:25:48.516 00:25:48.516 real 0m10.757s 00:25:48.516 user 0m34.282s 00:25:48.516 sys 0m7.182s 00:25:48.516 20:55:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:48.516 20:55:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:48.516 ************************************ 00:25:48.516 END TEST nvmf_target_disconnect_tc2 00:25:48.516 ************************************ 00:25:48.516 20:55:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:25:48.516 20:55:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:25:48.516 20:55:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:25:48.516 20:55:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:48.516 20:55:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:25:48.516 20:55:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:48.516 20:55:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:25:48.516 20:55:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:48.516 20:55:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:48.516 rmmod nvme_tcp 00:25:48.516 rmmod nvme_fabrics 00:25:48.516 rmmod nvme_keyring 00:25:48.516 20:55:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:48.516 20:55:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:25:48.516 20:55:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:25:48.516 20:55:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1772664 ']' 00:25:48.516 20:55:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1772664 00:25:48.516 20:55:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1772664 ']' 00:25:48.516 20:55:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1772664 00:25:48.516 20:55:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:25:48.516 20:55:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:48.516 20:55:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1772664 00:25:48.516 20:55:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:25:48.516 20:55:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:25:48.516 20:55:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1772664' 00:25:48.516 killing process with pid 1772664 00:25:48.516 20:55:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 1772664 00:25:48.516 20:55:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1772664 00:25:48.516 20:55:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:48.516 20:55:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:48.516 20:55:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:48.516 20:55:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:25:48.516 20:55:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:25:48.517 20:55:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:48.517 20:55:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:25:48.517 20:55:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:48.517 20:55:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:48.517 20:55:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.517 20:55:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:48.517 20:55:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.054 20:55:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:51.054 00:25:51.054 real 0m15.928s 00:25:51.054 user 1m0.498s 00:25:51.054 sys 0m9.737s 00:25:51.054 20:55:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:51.054 20:55:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:51.054 ************************************ 00:25:51.054 END TEST nvmf_target_disconnect 00:25:51.054 ************************************ 00:25:51.054 20:55:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:25:51.054 00:25:51.054 real 5m3.972s 00:25:51.054 user 11m3.330s 00:25:51.054 sys 1m15.350s 00:25:51.054 20:55:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:51.054 20:55:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.054 ************************************ 00:25:51.054 END TEST nvmf_host 00:25:51.054 ************************************ 00:25:51.054 20:55:54 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:25:51.054 20:55:54 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:25:51.054 20:55:54 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:25:51.054 20:55:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:51.054 20:55:54 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:51.054 20:55:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:51.054 ************************************ 00:25:51.054 START TEST nvmf_target_core_interrupt_mode 00:25:51.054 ************************************ 00:25:51.054 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:25:51.054 * Looking for test storage... 00:25:51.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:25:51.054 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:51.054 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:25:51.054 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:51.054 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:51.054 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:51.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.055 --rc genhtml_branch_coverage=1 00:25:51.055 --rc genhtml_function_coverage=1 00:25:51.055 --rc genhtml_legend=1 00:25:51.055 --rc geninfo_all_blocks=1 00:25:51.055 --rc geninfo_unexecuted_blocks=1 00:25:51.055 00:25:51.055 ' 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:51.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.055 --rc genhtml_branch_coverage=1 00:25:51.055 --rc genhtml_function_coverage=1 00:25:51.055 --rc genhtml_legend=1 00:25:51.055 --rc geninfo_all_blocks=1 00:25:51.055 --rc geninfo_unexecuted_blocks=1 00:25:51.055 00:25:51.055 ' 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:51.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.055 --rc genhtml_branch_coverage=1 00:25:51.055 --rc genhtml_function_coverage=1 00:25:51.055 --rc genhtml_legend=1 00:25:51.055 --rc geninfo_all_blocks=1 00:25:51.055 --rc geninfo_unexecuted_blocks=1 00:25:51.055 00:25:51.055 ' 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:51.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.055 --rc genhtml_branch_coverage=1 00:25:51.055 --rc genhtml_function_coverage=1 00:25:51.055 --rc genhtml_legend=1 00:25:51.055 --rc geninfo_all_blocks=1 00:25:51.055 --rc geninfo_unexecuted_blocks=1 00:25:51.055 00:25:51.055 ' 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:51.055 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:51.055 ************************************ 00:25:51.056 START TEST nvmf_abort 00:25:51.056 ************************************ 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:25:51.056 * Looking for test storage... 00:25:51.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:51.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.056 --rc genhtml_branch_coverage=1 00:25:51.056 --rc genhtml_function_coverage=1 00:25:51.056 --rc genhtml_legend=1 00:25:51.056 --rc geninfo_all_blocks=1 00:25:51.056 --rc geninfo_unexecuted_blocks=1 00:25:51.056 00:25:51.056 ' 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:51.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.056 --rc genhtml_branch_coverage=1 00:25:51.056 --rc genhtml_function_coverage=1 00:25:51.056 --rc genhtml_legend=1 00:25:51.056 --rc geninfo_all_blocks=1 00:25:51.056 --rc geninfo_unexecuted_blocks=1 00:25:51.056 00:25:51.056 ' 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:51.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.056 --rc genhtml_branch_coverage=1 00:25:51.056 --rc genhtml_function_coverage=1 00:25:51.056 --rc genhtml_legend=1 00:25:51.056 --rc geninfo_all_blocks=1 00:25:51.056 --rc geninfo_unexecuted_blocks=1 00:25:51.056 00:25:51.056 ' 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:51.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.056 --rc genhtml_branch_coverage=1 00:25:51.056 --rc genhtml_function_coverage=1 00:25:51.056 --rc genhtml_legend=1 00:25:51.056 --rc geninfo_all_blocks=1 00:25:51.056 --rc geninfo_unexecuted_blocks=1 00:25:51.056 00:25:51.056 ' 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:51.056 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:51.057 20:55:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:51.057 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:51.057 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:51.057 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:51.057 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:51.057 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:51.057 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:25:51.057 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:25:51.057 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:51.057 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:51.057 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:51.057 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:51.057 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:51.057 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.057 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:51.057 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.057 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:51.057 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:51.057 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:25:51.057 20:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:53.594 20:55:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:53.594 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:53.594 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:53.594 Found net devices under 0000:09:00.0: cvl_0_0 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:53.594 Found net devices under 0000:09:00.1: cvl_0_1 00:25:53.594 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:53.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:53.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:25:53.595 00:25:53.595 --- 10.0.0.2 ping statistics --- 00:25:53.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.595 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:53.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:53.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:25:53.595 00:25:53.595 --- 10.0.0.1 ping statistics --- 00:25:53.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.595 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:53.595 20:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:53.595 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:25:53.595 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:53.595 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:53.595 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:53.595 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1775502 00:25:53.595 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:25:53.595 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1775502 00:25:53.595 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1775502 ']' 00:25:53.595 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:53.595 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:53.595 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:53.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:53.595 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:53.595 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:53.595 [2024-11-26 20:55:57.058913] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:25:53.595 [2024-11-26 20:55:57.060086] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:25:53.595 [2024-11-26 20:55:57.060163] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:53.595 [2024-11-26 20:55:57.136447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:53.595 [2024-11-26 20:55:57.197776] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:53.595 [2024-11-26 20:55:57.197831] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:53.595 [2024-11-26 20:55:57.197845] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:53.595 [2024-11-26 20:55:57.197857] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:53.595 [2024-11-26 20:55:57.197867] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:53.595 [2024-11-26 20:55:57.203326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:53.595 [2024-11-26 20:55:57.203379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:53.595 [2024-11-26 20:55:57.203384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:53.856 [2024-11-26 20:55:57.306655] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:25:53.856 [2024-11-26 20:55:57.306838] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:25:53.856 [2024-11-26 20:55:57.306844] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
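These notices reflect the --interrupt-mode run that this section tests: the target is launched with core mask 0xE (reactors on cores 1-3) and each spdk_thread is switched to interrupt mode, so the reactors block waiting for events instead of busy-polling. Condensed from the launch line in the trace, the invocation amounts to the sketch below (paths shortened relative to the spdk checkout; the cvl_0_0_ns_spdk namespace was created earlier in the log):

    # Interrupt-mode nvmf target inside the test namespace (sketch assembled from the trace).
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &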
00:25:53.856 [2024-11-26 20:55:57.307112] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:25:53.856 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:53.856 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:25:53.856 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:53.856 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:53.856 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:53.856 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:53.856 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:25:53.856 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.856 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:53.856 [2024-11-26 20:55:57.360114] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:53.856 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.856 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:25:53.856 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.856 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:53.856 Malloc0 00:25:53.856 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.856 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:25:53.856 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.856 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:53.856 Delay0 00:25:53.856 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.856 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:25:53.856 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.856 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:53.856 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.856 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:25:53.856 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:53.856 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:53.856 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.856 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:53.856 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.856 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:53.856 [2024-11-26 20:55:57.428271] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:53.856 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.856 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:53.856 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.856 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:53.856 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.856 20:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:25:53.856 [2024-11-26 20:55:57.539194] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:25:56.409 Initializing NVMe Controllers 00:25:56.409 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:25:56.409 controller IO queue size 128 less than required 00:25:56.409 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:25:56.409 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:25:56.409 Initialization complete. Launching workers. 
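Taken together, the rpc_cmd calls traced above assemble the abort-test target and then drive it from the host: a TCP transport, a 64 MB Malloc0 with 4096-byte blocks, a Delay0 wrapper adding 1,000,000 us (one second) of average and p99 latency to reads and writes, subsystem cnode0 exposing Delay0 on 10.0.0.2:4420, and finally the abort example queuing 128-deep random reads for one second so most of them can be aborted while stuck behind the delay. A condensed sketch of the same sequence (RPC and SPDK_DIR are placeholders):

RPC="$SPDK_DIR/scripts/rpc.py"                         # placeholder path

$RPC nvmf_create_transport -t tcp -o -u 8192 -a 256    # same transport options the test passes
$RPC bdev_malloc_create 64 4096 -b Malloc0             # 64 MB malloc bdev, 4096-byte blocks
$RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# host side: the abort example, as invoked by the test
"$SPDK_DIR/build/examples/abort" -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128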
00:25:56.409 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28556 00:25:56.409 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28617, failed to submit 66 00:25:56.409 success 28556, unsuccessful 61, failed 0 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:56.409 rmmod nvme_tcp 00:25:56.409 rmmod nvme_fabrics 00:25:56.409 rmmod nvme_keyring 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1775502 ']' 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1775502 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1775502 ']' 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1775502 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1775502 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1775502' 00:25:56.409 killing process with pid 1775502 
00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1775502 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1775502 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:56.409 20:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:58.952 00:25:58.952 real 0m7.513s 00:25:58.952 user 0m9.426s 00:25:58.952 sys 0m2.989s 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:58.952 ************************************ 00:25:58.952 END TEST nvmf_abort 00:25:58.952 ************************************ 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:58.952 ************************************ 00:25:58.952 START TEST nvmf_ns_hotplug_stress 00:25:58.952 ************************************ 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:25:58.952 * Looking for test storage... 
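nvmftestfini, traced above, unwinds what nvmftestinit set up: the nvme-tcp/nvme-fabrics modules are unloaded, the SPDK-tagged iptables rule is dropped by filtering the SPDK_NVMF comment out of an iptables-save/iptables-restore round trip, the target namespace is removed, and the leftover address on cvl_0_1 is flushed. A rough sketch of that teardown; the namespace-removal line is an assumption about what _remove_spdk_ns amounts to, not a copy of it:

# strip only the rules the test tagged with the SPDK_NVMF comment
iptables-save | grep -v SPDK_NVMF | iptables-restore

# assumed equivalent of _remove_spdk_ns: drop the target namespace if present
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true

# clear the initiator-side address
ip -4 addr flush cvl_0_1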
00:25:58.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:58.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:58.952 --rc genhtml_branch_coverage=1 00:25:58.952 --rc genhtml_function_coverage=1 00:25:58.952 --rc genhtml_legend=1 00:25:58.952 --rc geninfo_all_blocks=1 00:25:58.952 --rc geninfo_unexecuted_blocks=1 00:25:58.952 00:25:58.952 ' 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:58.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:58.952 --rc genhtml_branch_coverage=1 00:25:58.952 --rc genhtml_function_coverage=1 00:25:58.952 --rc genhtml_legend=1 00:25:58.952 --rc geninfo_all_blocks=1 00:25:58.952 --rc geninfo_unexecuted_blocks=1 00:25:58.952 00:25:58.952 ' 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:58.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:58.952 --rc genhtml_branch_coverage=1 00:25:58.952 --rc genhtml_function_coverage=1 00:25:58.952 --rc genhtml_legend=1 00:25:58.952 --rc geninfo_all_blocks=1 00:25:58.952 --rc geninfo_unexecuted_blocks=1 00:25:58.952 00:25:58.952 ' 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:58.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:58.952 --rc genhtml_branch_coverage=1 00:25:58.952 --rc genhtml_function_coverage=1 
00:25:58.952 --rc genhtml_legend=1 00:25:58.952 --rc geninfo_all_blocks=1 00:25:58.952 --rc geninfo_unexecuted_blocks=1 00:25:58.952 00:25:58.952 ' 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:58.952 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:25:58.953 20:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:00.860 20:56:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:00.860 20:56:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:00.860 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.860 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:00.861 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:00.861 
20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:00.861 Found net devices under 0000:09:00.0: cvl_0_0 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:00.861 Found net devices under 0000:09:00.1: cvl_0_1 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:00.861 20:56:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:00.861 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:00.861 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:26:00.861 00:26:00.861 --- 10.0.0.2 ping statistics --- 00:26:00.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.861 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:00.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:00.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:26:00.861 00:26:00.861 --- 10.0.0.1 ping statistics --- 00:26:00.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.861 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:00.861 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:00.862 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:00.862 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:00.862 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:00.862 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:00.862 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:26:00.862 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:00.862 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:00.862 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:00.862 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1777817 00:26:00.862 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:26:00.862 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1777817 00:26:00.862 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1777817 ']' 00:26:00.862 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:00.862 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:00.862 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:00.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:00.862 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:00.862 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:00.862 [2024-11-26 20:56:04.492705] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:00.862 [2024-11-26 20:56:04.493744] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:26:00.862 [2024-11-26 20:56:04.493807] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:01.120 [2024-11-26 20:56:04.564189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:01.120 [2024-11-26 20:56:04.619594] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:01.120 [2024-11-26 20:56:04.619645] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:01.120 [2024-11-26 20:56:04.619673] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:01.120 [2024-11-26 20:56:04.619684] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:01.120 [2024-11-26 20:56:04.619693] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:01.120 [2024-11-26 20:56:04.621156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:01.120 [2024-11-26 20:56:04.621210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:01.120 [2024-11-26 20:56:04.621214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:01.120 [2024-11-26 20:56:04.707763] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:01.120 [2024-11-26 20:56:04.707969] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:01.120 [2024-11-26 20:56:04.707971] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:01.120 [2024-11-26 20:56:04.708245] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
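As with the abort run, this target starts with --interrupt-mode, so DPDK reactors come up on cores 1-3 and every spdk_thread (app_thread and the nvmf poll groups) is switched to interrupt mode before the test proceeds. If you want to confirm that state on a live target, the framework and thread RPCs report it; an illustrative check, with RPC a placeholder for the rpc.py path:

RPC="$SPDK_DIR/scripts/rpc.py"     # placeholder path, default /var/tmp/spdk.sock socket

$RPC framework_get_reactors        # per-core reactor state and the lightweight threads pinned to each
$RPC thread_get_stats              # per-thread busy/idle counters for app_thread and the poll groups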
00:26:01.120 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:01.120 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:26:01.120 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:01.120 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:01.120 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:01.120 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:01.120 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:26:01.120 20:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:01.378 [2024-11-26 20:56:05.014002] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:01.378 20:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:01.637 20:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:01.896 [2024-11-26 20:56:05.566248] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:01.896 20:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:02.465 20:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:26:02.465 Malloc0 00:26:02.465 20:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:03.034 Delay0 00:26:03.034 20:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:03.034 20:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:26:03.293 NULL1 00:26:03.293 20:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
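For the hotplug-stress target, the rpc.py calls above build subsystem cnode1 with at most 10 namespaces and two backing devices: Delay0 (a delayed 32 MB malloc bdev) and NULL1, a 1000 MB null bdev with 512-byte blocks. The loop that follows runs spdk_nvme_perf against the subsystem for 30 seconds while namespace 1 is repeatedly removed and re-added and NULL1 is grown one step at a time via bdev_null_resize. A condensed sketch of the setup plus one loop iteration (RPC and SPDK_DIR are placeholders):

RPC="$SPDK_DIR/scripts/rpc.py"                                     # placeholder path

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_malloc_create 32 512 -b Malloc0                          # 32 MB, 512-byte blocks
$RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$RPC bdev_null_create NULL1 1000 512                               # 1000 MB null bdev, 512-byte blocks
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# host-side load, as launched by the test
"$SPDK_DIR/build/bin/spdk_nvme_perf" -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &

# one hotplug iteration: yank namespace 1, re-add Delay0, grow NULL1
size=1000
$RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
size=$((size + 1))
$RPC bdev_null_resize NULL1 $size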
00:26:03.859 20:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1778118 00:26:03.859 20:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1778118 00:26:03.859 20:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:26:03.859 20:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:05.235 Read completed with error (sct=0, sc=11) 00:26:05.235 20:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:05.235 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:05.235 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:05.235 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:05.235 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:05.235 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:05.236 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:05.236 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:05.236 20:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:26:05.236 20:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:26:05.492 true 00:26:05.492 20:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1778118 00:26:05.492 20:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:06.419 20:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:06.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:06.675 20:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:26:06.675 20:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:26:06.932 true 00:26:06.932 20:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1778118 00:26:06.932 20:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:26:07.188 20:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:07.444 20:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:26:07.444 20:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:26:07.701 true 00:26:07.701 20:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1778118 00:26:07.701 20:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:07.958 20:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:08.216 20:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:26:08.216 20:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:26:08.474 true 00:26:08.474 20:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1778118 00:26:08.474 20:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:09.406 20:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:09.407 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:09.407 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:09.663 20:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:26:09.663 20:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:26:09.920 true 00:26:09.920 20:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1778118 00:26:09.920 20:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:10.177 20:56:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:10.433 20:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:26:10.433 20:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:26:10.691 true 00:26:10.948 20:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1778118 00:26:10.948 20:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:11.205 20:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:11.461 20:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:26:11.461 20:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:26:11.718 true 00:26:11.718 20:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1778118 00:26:11.718 20:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:12.650 20:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:12.650 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:12.907 20:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:26:12.907 20:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:26:13.165 true 00:26:13.165 20:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1778118 00:26:13.165 20:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:13.422 20:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:13.680 20:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:26:13.680 20:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:26:13.937 true 00:26:13.937 20:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1778118 00:26:13.937 20:56:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:14.194 20:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:14.452 20:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:26:14.452 20:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:26:14.710 true 00:26:14.710 20:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1778118 00:26:14.710 20:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:15.641 20:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:15.898 20:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:26:15.898 20:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:26:16.156 true 00:26:16.414 20:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1778118 00:26:16.414 20:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:16.674 20:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:16.930 20:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:26:16.930 20:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:26:17.188 true 00:26:17.188 20:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1778118 00:26:17.188 20:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:17.445 20:56:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:17.703 20:56:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:26:17.703 20:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:26:17.960 true 00:26:17.960 20:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1778118 00:26:17.960 20:56:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:18.893 20:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:18.893 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:19.150 20:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:26:19.150 20:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:26:19.408 true 00:26:19.408 20:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1778118 00:26:19.408 20:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:19.665 20:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:19.922 20:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:26:19.922 20:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:26:20.179 true 00:26:20.179 20:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1778118 00:26:20.179 20:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:20.749 20:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:20.749 20:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:26:20.749 20:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:26:21.070 true 00:26:21.070 20:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1778118 00:26:21.070 20:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:22.030 20:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:22.287 20:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:26:22.287 20:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:26:22.546 true 00:26:22.546 20:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1778118 00:26:22.546 20:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:23.112 20:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:23.112 20:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:26:23.112 20:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:26:23.370 true 00:26:23.370 20:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1778118 00:26:23.370 20:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:23.628 20:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:24.194 20:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:26:24.194 20:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:26:24.194 true 00:26:24.194 20:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1778118 00:26:24.194 20:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:25.126 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:25.126 20:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:25.383 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:25.383 20:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:26:25.383 20:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:26:25.641 true 00:26:25.898 20:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1778118 00:26:25.898 20:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:26.155 20:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:26.413 20:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:26:26.413 20:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:26:26.670 true 00:26:26.670 20:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1778118 00:26:26.670 20:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:26.928 20:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:27.186 20:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:26:27.186 20:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:26:27.444 true 00:26:27.444 20:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1778118 00:26:27.444 20:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:28.378 20:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:28.378 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:28.378 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:28.378 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:28.636 20:56:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:26:28.636 20:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:26:28.893 true 00:26:28.893 20:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1778118 00:26:28.893 20:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:29.151 20:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:29.409 20:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:26:29.409 20:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:26:29.667 true 00:26:29.667 20:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1778118 00:26:29.667 20:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:30.600 20:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:30.857 20:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:26:30.857 20:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:26:31.115 true 00:26:31.115 20:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1778118 00:26:31.115 20:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:31.372 20:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:31.629 20:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:26:31.630 20:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:26:31.887 true 00:26:31.887 20:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1778118 00:26:31.887 20:56:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:32.144 20:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:32.400 20:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:26:32.400 20:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:26:32.657 true 00:26:32.915 20:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1778118 00:26:32.915 20:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:33.847 20:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:34.104 20:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:26:34.104 20:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:26:34.104 Initializing NVMe Controllers 00:26:34.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:34.104 Controller IO queue size 128, less than required. 00:26:34.104 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:34.104 Controller IO queue size 128, less than required. 00:26:34.104 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:34.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:34.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:34.104 Initialization complete. Launching workers. 
00:26:34.104 ======================================================== 00:26:34.104 Latency(us) 00:26:34.104 Device Information : IOPS MiB/s Average min max 00:26:34.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 642.16 0.31 82765.20 2944.36 1084899.58 00:26:34.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8656.76 4.23 14787.59 2667.56 366536.63 00:26:34.104 ======================================================== 00:26:34.104 Total : 9298.93 4.54 19481.99 2667.56 1084899.58 00:26:34.104 00:26:34.361 true 00:26:34.361 20:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1778118 00:26:34.361 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1778118) - No such process 00:26:34.362 20:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1778118 00:26:34.362 20:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:34.619 20:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:34.876 20:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:26:34.876 20:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:26:34.876 20:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:26:34.876 20:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:34.876 20:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:26:35.134 null0 00:26:35.134 20:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:35.134 20:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:35.134 20:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:26:35.392 null1 00:26:35.392 20:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:35.392 20:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:35.392 20:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:26:35.653 null2 00:26:35.653 20:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:35.653 20:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:35.653 20:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:26:35.913 null3 00:26:35.913 20:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:35.913 20:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:35.913 20:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:26:36.171 null4 00:26:36.171 20:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:36.171 20:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:36.171 20:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:26:36.429 null5 00:26:36.429 20:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:36.429 20:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:36.429 20:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:26:36.687 null6 00:26:36.687 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:36.687 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:36.688 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:26:36.946 null7 00:26:36.946 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:36.946 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:36.946 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:26:36.946 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:36.946 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
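
Editor's note: once perf exits, kill -0 fails with "No such process" (visible above), the script reaps the perf process and strips both namespaces, then sets up phase two: eight null bdevs (null0 ... null7, 100 MB each with 4096-byte blocks) that eight concurrent workers will add and remove as namespaces 1-8. Roughly, per the @53-@60 markers above, under the same assumptions as the earlier sketch:

    # Sketch of the hand-off into the concurrent phase (markers @53-@60 above); same $rpc/$PERF_PID as before.
    wait "$PERF_PID"                                                  # line 53: reap spdk_nvme_perf
    "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # line 54
    "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2      # line 55

    nthreads=8                                                        # line 58
    pids=()
    for ((i = 0; i < nthreads; i++)); do                              # line 59
      "$rpc" bdev_null_create "null$i" 100 4096                       # line 60: 100 MB, 4096-byte blocks
    done
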
00:26:36.946 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1782120 1782121 1782123 1782125 1782127 1782129 1782131 1782133 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:36.947 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:37.205 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:37.205 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:37.205 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:37.205 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:37.205 20:56:40 
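
Editor's note: each worker is an add_remove shell function that loops ten times, adding its null bdev as a fixed namespace ID and removing it again, so eight namespace IDs churn concurrently against cnode1; the parent collects the worker PIDs and waits for all of them (the "wait 1782120 1782121 ..." entry above). A plausible reconstruction from the @14-@18 and @62-@66 markers, illustrative only:

    # Sketch of the concurrent add/remove workers (markers @14-@18 and @62-@66 above).
    add_remove() {
      local nsid=$1 bdev=$2                                                          # line 14
      for ((i = 0; i < 10; i++)); do                                                 # line 16
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # line 17
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # line 18
      done
    }

    for ((i = 0; i < nthreads; i++)); do
      add_remove $((i + 1)) "null$i" &            # line 63: nsid 1-8 paired with null0-null7
      pids+=($!)                                  # line 64
    done
    wait "${pids[@]}"                             # line 66: the "wait 1782120 1782121 ..." above
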
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:37.205 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:37.205 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:37.205 20:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:37.464 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:37.464 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:37.464 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:37.722 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:37.722 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:37.722 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:37.722 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:37.722 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:37.722 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:37.722 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:37.722 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:37.722 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:37.722 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:37.722 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:37.722 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:26:37.722 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:37.722 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:37.722 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:37.722 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:37.722 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:37.722 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:37.722 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:37.722 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:37.722 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:37.980 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:37.980 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:37.980 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:37.980 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:37.980 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:37.980 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:37.980 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:37.980 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:26:38.241 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:38.241 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:38.241 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:38.241 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:38.241 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:38.241 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:38.241 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:38.241 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:38.241 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:38.241 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:38.241 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:38.241 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:38.241 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:38.241 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:38.241 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:38.241 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:38.241 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:38.241 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:38.241 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:38.241 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:38.241 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:38.241 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:38.241 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:38.241 20:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:38.545 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:38.545 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:38.545 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:38.546 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:38.546 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:38.546 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:38.546 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:38.546 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:38.804 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:38.804 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:38.804 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:38.804 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:38.804 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:38.804 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:38.804 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:38.804 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:38.804 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:38.804 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:38.804 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:38.804 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:38.804 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:38.804 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:38.804 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:38.804 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:38.804 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:38.804 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:38.804 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:38.804 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:38.804 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:38.804 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:38.804 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:38.804 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:39.062 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:39.062 20:56:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:39.062 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:39.062 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:39.062 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:39.062 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:39.062 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:39.062 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:39.321 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:39.321 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:39.321 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:39.321 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:39.321 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:39.321 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:39.321 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:39.321 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:39.321 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:39.321 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:39.321 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:39.321 20:56:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:39.321 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:39.321 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:39.321 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:39.321 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:39.321 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:39.321 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:39.321 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:39.321 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:39.321 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:39.321 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:39.321 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:39.321 20:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:39.579 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:39.579 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:39.838 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:39.838 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:39.838 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:39.838 
20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:39.838 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:39.838 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:40.096 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:40.096 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:40.096 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:40.096 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:40.096 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:40.096 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:40.096 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:40.096 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:40.096 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:40.096 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:40.096 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:40.096 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:40.096 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:40.096 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:40.096 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:40.096 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:40.096 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:40.096 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:40.096 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:40.096 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:40.096 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:40.096 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:40.096 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:40.096 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:40.354 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:40.354 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:40.354 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:40.354 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:40.354 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:40.354 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:40.354 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:40.354 20:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:40.613 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:40.613 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:26:40.613 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:40.613 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:40.613 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:40.613 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:40.613 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:40.613 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:40.613 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:40.613 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:40.613 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:40.613 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:40.613 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:40.613 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:40.614 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:40.614 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:40.614 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:40.614 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:40.614 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:40.614 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:40.614 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:40.614 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:40.614 
20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:40.614 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:40.872 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:40.872 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:40.872 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:40.872 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:40.872 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:40.872 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:40.872 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:40.872 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:41.131 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:41.131 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:41.131 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:41.131 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:41.131 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:41.131 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:41.131 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:41.131 20:56:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:41.131 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:41.131 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:41.131 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:41.131 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:41.131 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:41.131 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:41.131 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:41.131 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:41.131 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:41.131 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:41.131 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:41.131 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:41.131 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:41.131 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:41.131 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:41.131 20:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:41.389 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:41.389 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:41.389 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:41.389 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:41.389 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:41.389 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:41.389 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:41.389 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:41.956 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:41.956 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:41.956 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:41.956 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:41.956 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:41.956 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:41.956 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:41.956 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:41.956 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:41.956 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:41.956 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:41.956 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:41.956 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:41.956 20:56:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:41.956 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:41.956 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:41.956 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:41.956 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:41.956 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:41.956 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:41.956 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:41.956 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:41.956 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:41.956 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:41.956 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:41.956 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:42.214 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:42.214 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:42.214 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:42.214 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:42.214 20:56:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:42.214 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:42.472 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:42.472 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:42.472 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:42.472 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:42.472 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:42.472 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:42.472 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:42.472 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:42.472 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:42.472 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:42.472 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:42.472 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:42.472 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:42.472 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:42.472 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:42.472 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:42.472 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:42.472 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:42.472 20:56:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:42.472 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:42.472 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:42.472 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:42.472 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:42.472 20:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:42.730 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:42.730 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:42.730 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:42.730 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:42.730 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:42.730 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:42.730 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:42.730 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:42.988 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:42.988 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:42.988 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:42.988 20:56:46 
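[editorial sketch] The trace above is the namespace hot-plug loop from target/ns_hotplug_stress.sh (lines @16-@18): eight null bdevs are repeatedly attached to and detached from nqn.2016-06.io.spdk:cnode1 through rpc.py. A minimal reconstruction inferred only from the trace follows; the shuf-based ordering is an assumption made to mimic the randomized order visible in the log, and the real script may issue the RPCs concurrently rather than sequentially.

    # Reconstruction (assumption): hot-add then hot-remove namespaces 1..8, ten times
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for (( i = 0; i < 10; i++ )); do
        for n in $(shuf -i 1-8); do      # order appears shuffled in the log
            "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
        done
        for n in $(shuf -i 1-8); do
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
        done
    done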
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:42.988 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:42.988 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:42.988 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:42.988 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:42.988 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:42.988 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:42.988 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:42.988 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:42.988 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:42.988 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:42.988 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:42.988 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:42.988 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:26:42.988 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:26:42.988 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:42.988 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:26:42.988 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:42.988 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:26:42.988 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:42.988 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:42.988 rmmod nvme_tcp 00:26:42.988 rmmod nvme_fabrics 00:26:42.989 rmmod nvme_keyring 00:26:42.989 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:42.989 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:26:42.989 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:26:42.989 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1777817 ']' 00:26:42.989 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1777817 
00:26:42.989 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1777817 ']' 00:26:42.989 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1777817 00:26:42.989 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:26:42.989 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:42.989 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1777817 00:26:42.989 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:42.989 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:42.989 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1777817' 00:26:42.989 killing process with pid 1777817 00:26:42.989 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1777817 00:26:42.989 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1777817 00:26:43.247 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:43.247 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:43.247 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:43.247 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:26:43.247 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:26:43.247 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:43.247 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:26:43.247 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:43.247 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:43.247 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:43.247 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:43.247 20:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:45.778 20:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:45.778 00:26:45.778 real 0m46.894s 00:26:45.778 user 3m17.162s 00:26:45.778 sys 0m21.342s 00:26:45.778 20:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:45.778 20:56:48 
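[editorial sketch] At this point the stress loop has finished and nvmftestfini tears the target down: the nvme-tcp/nvme-fabrics/nvme-keyring modules are unloaded (the rmmod lines above) and killprocess stops the nvmf_tgt reactor, pid 1777817. A sketch of the killprocess helper as it can be read back from the autotest_common.sh@954-@978 trace lines; the sudo-child handling is an assumption, only the checks visible in the trace are certain.

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1                 # @954: bail out on an empty pid
        kill -0 "$pid" || return 0                # @958: nothing to do if the process is already gone
        if [[ $(uname) == Linux ]]; then          # @959
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # @960: reactor_1 in this run
            if [[ $process_name == sudo ]]; then
                pid=$(pgrep -P "$pid")            # assumption: signal the child when the target was started via sudo
            fi
        fi
        echo "killing process with pid $pid"      # @972
        kill "$pid"                               # @973
        wait "$pid"                               # @978
    }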
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:45.778 ************************************ 00:26:45.778 END TEST nvmf_ns_hotplug_stress 00:26:45.778 ************************************ 00:26:45.778 20:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:26:45.778 20:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:45.778 20:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:45.778 20:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:45.778 ************************************ 00:26:45.778 START TEST nvmf_delete_subsystem 00:26:45.778 ************************************ 00:26:45.778 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:26:45.778 * Looking for test storage... 00:26:45.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:45.778 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:45.778 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:26:45.778 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:45.778 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:45.778 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:45.778 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:45.778 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:45.778 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:26:45.778 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:26:45.778 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:26:45.778 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:26:45.778 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:26:45.778 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:26:45.778 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:26:45.778 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:45.778 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:26:45.778 20:56:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:26:45.778 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:45.778 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:45.778 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:26:45.778 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:26:45.778 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:45.778 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:26:45.778 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:45.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.779 --rc genhtml_branch_coverage=1 00:26:45.779 --rc genhtml_function_coverage=1 00:26:45.779 --rc genhtml_legend=1 00:26:45.779 --rc geninfo_all_blocks=1 00:26:45.779 --rc geninfo_unexecuted_blocks=1 00:26:45.779 00:26:45.779 ' 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:45.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.779 --rc genhtml_branch_coverage=1 00:26:45.779 --rc genhtml_function_coverage=1 00:26:45.779 --rc genhtml_legend=1 00:26:45.779 --rc geninfo_all_blocks=1 00:26:45.779 --rc geninfo_unexecuted_blocks=1 00:26:45.779 00:26:45.779 ' 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:45.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.779 --rc genhtml_branch_coverage=1 00:26:45.779 --rc genhtml_function_coverage=1 00:26:45.779 --rc genhtml_legend=1 00:26:45.779 --rc geninfo_all_blocks=1 00:26:45.779 --rc 
geninfo_unexecuted_blocks=1 00:26:45.779 00:26:45.779 ' 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:45.779 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:45.779 --rc genhtml_branch_coverage=1 00:26:45.779 --rc genhtml_function_coverage=1 00:26:45.779 --rc genhtml_legend=1 00:26:45.779 --rc geninfo_all_blocks=1 00:26:45.779 --rc geninfo_unexecuted_blocks=1 00:26:45.779 00:26:45.779 ' 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.779 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.780 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.780 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:26:45.780 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:45.780 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:26:45.780 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:45.780 20:56:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:45.780 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:45.780 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:45.780 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:45.780 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:45.780 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:45.780 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:45.780 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:45.780 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:45.780 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:26:45.780 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:45.780 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:45.780 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:45.780 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:45.780 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:45.780 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:45.780 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:45.780 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:45.780 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:45.780 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:45.780 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:26:45.780 20:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:47.679 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:47.679 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:26:47.679 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:47.679 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:47.679 20:56:51 
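The build_nvmf_app_args trace above decides how the target will be launched later in this log. A short sketch of that assembly, restricted to the branches actually taken in this run (the base nvmf_tgt command itself is added elsewhere in nvmf/common.sh):

NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shared-memory id plus the full tracepoint mask
NVMF_APP+=("${NO_HUGE[@]}")                   # empty in this run; would carry no-hugepage options otherwise
NVMF_APP+=(--interrupt-mode)                  # this suite runs the target in interrupt mode ('[' 1 -eq 1 ']' above)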
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:47.679 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:47.679 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:47.679 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:26:47.679 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:47.679 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:26:47.679 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:26:47.679 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:26:47.679 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:26:47.679 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:26:47.679 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:26:47.679 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:47.679 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:47.679 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:47.679 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:47.679 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:47.679 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:47.679 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:47.679 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:47.679 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:47.679 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:47.679 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:47.679 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:47.679 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:47.679 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:47.679 20:56:51 
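As a reading aid for the whitelist being built above, these are the device IDs it covers (vendor 0x8086 = Intel, 0x15b3 = Mellanox). In the real script the arrays end up holding the matching PCI bus addresses rather than the raw IDs; only the e810 family matters here because the job sets SPDK_TEST_NVMF_NICS=e810:

e810=(0x1592 0x159b)
x722=(0x37d2)
mlx=(0xa2dc 0x1021 0xa2d6 0x101d 0x101b 0x1017 0x1019 0x1015 0x1013)
pci_devs=("${e810[@]}")   # e810 selected, so the x722 and mlx lists are ignored below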
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:47.679 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:47.679 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:47.679 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:47.679 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:47.679 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:47.679 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:47.679 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:47.679 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:47.680 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:47.680 20:56:51 
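The two matches above are the E810 ports used for the rest of the test. If you need to reproduce this detection by hand on the build host, something like the following (not part of the test scripts) should show the same functions and their bound netdevs:

lspci -D -d 8086:159b                          # expect 0000:09:00.0 and 0000:09:00.1 (driver: ice)
ls /sys/bus/pci/devices/0000:09:00.0/net/      # expect cvl_0_0
ls /sys/bus/pci/devices/0000:09:00.1/net/      # expect cvl_0_1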
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:47.680 Found net devices under 0000:09:00.0: cvl_0_0 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:47.680 Found net devices under 0000:09:00.1: cvl_0_1 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:47.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:47.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:26:47.680 00:26:47.680 --- 10.0.0.2 ping statistics --- 00:26:47.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:47.680 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:47.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:47.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:26:47.680 00:26:47.680 --- 10.0.0.1 ping statistics --- 00:26:47.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:47.680 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:47.680 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:47.938 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1784999 00:26:47.938 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:26:47.938 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1784999 00:26:47.938 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1784999 ']' 00:26:47.938 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:47.938 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:47.938 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:47.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
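Condensing the nvmf_tcp_init sequence traced above into one place: the target-side port (cvl_0_0) is moved into its own network namespace and the two E810 ports are addressed back-to-back on 10.0.0.0/24, with an iptables rule opening the NVMe/TCP port on the initiator side. The commands below are copied from the trace:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, host namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # accept NVMe/TCP traffic arriving on the initiator port
ping -c 1 10.0.0.2                                                 # connectivity check, host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # and namespace -> host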
00:26:47.938 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:47.938 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:47.938 [2024-11-26 20:56:51.423255] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:47.938 [2024-11-26 20:56:51.424432] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:26:47.938 [2024-11-26 20:56:51.424486] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:47.938 [2024-11-26 20:56:51.497747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:47.938 [2024-11-26 20:56:51.556466] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:47.938 [2024-11-26 20:56:51.556521] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:47.938 [2024-11-26 20:56:51.556534] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:47.938 [2024-11-26 20:56:51.556545] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:47.938 [2024-11-26 20:56:51.556554] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:47.938 [2024-11-26 20:56:51.559348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:47.938 [2024-11-26 20:56:51.559357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.194 [2024-11-26 20:56:51.660013] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:48.194 [2024-11-26 20:56:51.660063] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:48.194 [2024-11-26 20:56:51.660259] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:26:48.194 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:48.194 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:26:48.194 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:48.194 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:48.194 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:48.194 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:48.194 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:48.194 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.194 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:48.194 [2024-11-26 20:56:51.712055] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:48.194 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.194 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:48.194 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.194 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:48.194 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.194 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:48.194 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.194 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:48.194 [2024-11-26 20:56:51.732355] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:48.194 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.194 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:26:48.194 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.194 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:48.194 NULL1 00:26:48.194 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.194 20:56:51 
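For reference, the configuration applied through rpc_cmd above corresponds to the following direct scripts/rpc.py invocations against the target's /var/tmp/spdk.sock (rpc_cmd is the autotest wrapper around the same RPCs):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                                   # TCP transport with the tuning flags traced above
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # allow any host, serial, max 10 namespaces
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_null_create NULL1 1000 512                                           # 1000 MiB null bdev, 512-byte blocks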
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:48.194 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.194 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:48.194 Delay0 00:26:48.194 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.194 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:48.194 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.194 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:48.194 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.194 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1785026 00:26:48.194 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:26:48.194 20:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:26:48.194 [2024-11-26 20:56:51.813393] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
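The last two RPCs above wire a delay bdev in front of the null bdev and expose it as a namespace, then perf is started in the background; the injected latency is what leaves I/O in flight when the subsystem is deleted a moment later. Equivalent commands, copied from the trace (the -r/-t/-w/-n delay values are in microseconds, i.e. roughly one second per I/O):

./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &        # 5 s random 70/30 R/W, qd 128, 512-byte I/O, on cores 2-3
perf_pid=$!
sleep 2                                              # delete_subsystem.sh@30: give perf time to connect and queue I/O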
00:26:50.087 20:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:50.087 20:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.087 20:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Write completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Write completed with error (sct=0, sc=8) 00:26:50.652 Write completed with error (sct=0, sc=8) 00:26:50.652 Write completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Write completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Write completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Write completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Write completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Write completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Write completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 
00:26:50.652 starting I/O failed: -6 00:26:50.652 Write completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Write completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Write completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Write completed with error (sct=0, sc=8) 00:26:50.652 Write completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Write completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Write completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Write completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Write completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 starting I/O failed: -6 00:26:50.652 Write completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Write completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Write completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Write completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read 
completed with error (sct=0, sc=8) 00:26:50.652 Write completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Write completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Write completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Write completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Write completed with error (sct=0, sc=8) 00:26:50.652 Write completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Write completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Write completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 starting I/O failed: -6 00:26:50.652 starting I/O failed: -6 00:26:50.652 starting I/O failed: -6 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Read completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.652 Write completed with error (sct=0, sc=8) 00:26:50.652 Write completed with error (sct=0, sc=8) 00:26:50.652 starting I/O failed: -6 00:26:50.653 Write completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Write completed with error (sct=0, sc=8) 00:26:50.653 Write completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Read completed with error (sct=0, sc=8) 
00:26:50.653 starting I/O failed: -6 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 Write completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Write completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Write completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 Write completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Write completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 Write completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 Write completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Write completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Read completed with error (sct=0, 
sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Read completed with error (sct=0, sc=8) 00:26:50.653 Write completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 Write completed with error (sct=0, sc=8) 00:26:50.653 starting I/O failed: -6 00:26:50.653 [2024-11-26 20:56:54.109557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f92c0000c40 is same with the state(6) to be set 00:26:51.586 [2024-11-26 20:56:55.071202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56d9b0 is same with the state(6) to be set 00:26:51.586 Read completed with error (sct=0, sc=8) 00:26:51.586 Read completed with error (sct=0, sc=8) 00:26:51.586 Read completed with error (sct=0, sc=8) 00:26:51.586 Write completed with error (sct=0, sc=8) 00:26:51.586 Read completed with error (sct=0, sc=8) 00:26:51.586 Write completed with error (sct=0, sc=8) 00:26:51.586 Write completed with error (sct=0, sc=8) 00:26:51.586 Read completed with error (sct=0, sc=8) 00:26:51.586 Read completed with error (sct=0, sc=8) 00:26:51.586 Write completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 [2024-11-26 20:56:55.107647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f92c000d020 is same with the state(6) to be set 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 
00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 [2024-11-26 20:56:55.110905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56c860 is same with the state(6) to be set 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with 
error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 [2024-11-26 20:56:55.111179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56c4a0 is same with the state(6) to be set 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Read completed with error (sct=0, sc=8) 00:26:51.587 Write completed with error (sct=0, sc=8) 00:26:51.587 [2024-11-26 20:56:55.111451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56c2c0 is same with the state(6) to be set 00:26:51.587 Initializing NVMe Controllers 00:26:51.587 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:51.587 Controller IO queue size 128, less than required. 00:26:51.587 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:51.587 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:26:51.587 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:26:51.587 Initialization complete. Launching workers. 
00:26:51.587 ======================================================== 00:26:51.587 Latency(us) 00:26:51.587 Device Information : IOPS MiB/s Average min max 00:26:51.587 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 190.57 0.09 955723.43 681.84 1012547.12 00:26:51.587 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 172.70 0.08 867457.60 562.52 1013323.61 00:26:51.587 ======================================================== 00:26:51.587 Total : 363.27 0.18 913760.99 562.52 1013323.61 00:26:51.587 00:26:51.587 [2024-11-26 20:56:55.112393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x56d9b0 (9): Bad file descriptor 00:26:51.587 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:26:51.587 20:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.587 20:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:26:51.587 20:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1785026 00:26:51.587 20:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:26:52.154 20:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:26:52.154 20:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1785026 00:26:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1785026) - No such process 00:26:52.154 20:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1785026 00:26:52.154 20:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:26:52.154 20:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1785026 00:26:52.154 20:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:26:52.154 20:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:52.154 20:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:26:52.154 20:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:52.154 20:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1785026 00:26:52.154 20:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:26:52.154 20:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:52.154 20:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:52.154 20:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:52.154 20:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:52.154 20:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.154 20:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:52.154 20:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.154 20:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:52.154 20:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.154 20:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:52.154 [2024-11-26 20:56:55.632232] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:52.154 20:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.154 20:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:52.154 20:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.154 20:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:52.154 20:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.154 20:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1785473 00:26:52.154 20:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:26:52.154 20:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:26:52.154 20:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1785473 00:26:52.154 20:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:52.154 [2024-11-26 20:56:55.697043] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
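The kill -0 / sleep 0.5 lines just above and below are the check loop from delete_subsystem.sh. A hedged reconstruction of that pattern, based on the line numbers visible in this trace rather than on the script itself (the behaviour on timeout is an assumption):

# first run: the subsystem was deleted while perf still had I/O queued, so perf must fail
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    if (( delay++ > 30 )); then exit 1; fi   # assumption: the script gives up if perf never exits (~15 s cap)
    sleep 0.5
done
NOT wait "$perf_pid"                         # autotest helper: the test passes only if perf exited non-zero
# second run (above): the subsystem is re-created first, the loop cap is 20 iterations, and it ends with a
# plain `wait "$perf_pid"` that must succeed once the 3-second perf run completes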
00:26:52.718 20:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:52.718 20:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1785473 00:26:52.718 20:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:52.974 20:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:52.975 20:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1785473 00:26:52.975 20:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:53.536 20:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:53.536 20:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1785473 00:26:53.536 20:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:54.099 20:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:54.099 20:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1785473 00:26:54.099 20:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:54.662 20:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:54.662 20:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1785473 00:26:54.662 20:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:55.227 20:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:55.227 20:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1785473 00:26:55.227 20:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:55.485 Initializing NVMe Controllers 00:26:55.485 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:55.485 Controller IO queue size 128, less than required. 00:26:55.485 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:55.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:26:55.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:26:55.485 Initialization complete. Launching workers. 
00:26:55.485 ======================================================== 00:26:55.485 Latency(us) 00:26:55.485 Device Information : IOPS MiB/s Average min max 00:26:55.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003653.89 1000199.55 1041126.87 00:26:55.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006236.51 1000198.42 1041319.03 00:26:55.485 ======================================================== 00:26:55.485 Total : 256.00 0.12 1004945.20 1000198.42 1041319.03 00:26:55.485 00:26:55.485 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:55.485 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1785473 00:26:55.485 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1785473) - No such process 00:26:55.485 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1785473 00:26:55.485 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:26:55.485 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:26:55.485 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:55.485 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:26:55.485 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:55.485 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:26:55.485 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:55.485 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:55.485 rmmod nvme_tcp 00:26:55.742 rmmod nvme_fabrics 00:26:55.742 rmmod nvme_keyring 00:26:55.742 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:55.742 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:26:55.742 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:26:55.742 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1784999 ']' 00:26:55.742 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1784999 00:26:55.742 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1784999 ']' 00:26:55.742 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1784999 00:26:55.742 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:26:55.742 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:55.742 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1784999 00:26:55.742 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:55.742 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:55.742 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1784999' 00:26:55.742 killing process with pid 1784999 00:26:55.742 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1784999 00:26:55.742 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1784999 00:26:56.001 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:56.001 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:56.001 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:56.001 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:26:56.001 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:26:56.001 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:56.001 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:26:56.001 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:56.001 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:56.001 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.001 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:56.001 20:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:57.908 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:57.908 00:26:57.908 real 0m12.514s 00:26:57.908 user 0m25.316s 00:26:57.908 sys 0m3.661s 00:26:57.908 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:57.908 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:57.908 ************************************ 00:26:57.908 END TEST nvmf_delete_subsystem 00:26:57.908 ************************************ 00:26:57.908 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:26:57.908 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:57.908 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:26:57.908 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:57.908 ************************************ 00:26:57.908 START TEST nvmf_host_management 00:26:57.908 ************************************ 00:26:57.908 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:26:58.168 * Looking for test storage... 00:26:58.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:58.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.168 --rc genhtml_branch_coverage=1 00:26:58.168 --rc genhtml_function_coverage=1 00:26:58.168 --rc genhtml_legend=1 00:26:58.168 --rc geninfo_all_blocks=1 00:26:58.168 --rc geninfo_unexecuted_blocks=1 00:26:58.168 00:26:58.168 ' 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:58.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.168 --rc genhtml_branch_coverage=1 00:26:58.168 --rc genhtml_function_coverage=1 00:26:58.168 --rc genhtml_legend=1 00:26:58.168 --rc geninfo_all_blocks=1 00:26:58.168 --rc geninfo_unexecuted_blocks=1 00:26:58.168 00:26:58.168 ' 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:58.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.168 --rc genhtml_branch_coverage=1 00:26:58.168 --rc genhtml_function_coverage=1 00:26:58.168 --rc genhtml_legend=1 00:26:58.168 --rc geninfo_all_blocks=1 00:26:58.168 --rc geninfo_unexecuted_blocks=1 00:26:58.168 00:26:58.168 ' 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:58.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:58.168 --rc genhtml_branch_coverage=1 00:26:58.168 --rc genhtml_function_coverage=1 00:26:58.168 --rc genhtml_legend=1 
00:26:58.168 --rc geninfo_all_blocks=1 00:26:58.168 --rc geninfo_unexecuted_blocks=1 00:26:58.168 00:26:58.168 ' 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:26:58.168 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:58.169 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:58.169 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:58.169 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:58.169 20:57:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:58.169 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:58.169 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:58.169 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:58.169 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:58.169 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:58.169 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:58.169 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:58.169 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:26:58.169 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:58.169 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:58.169 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:58.169 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:58.169 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:58.169 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:58.169 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:58.169 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.169 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:58.169 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:58.169 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:26:58.169 20:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:00.703 20:57:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:00.703 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:00.703 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:00.703 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:00.704 Found net devices under 0000:09:00.0: cvl_0_0 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:00.704 Found net devices under 0000:09:00.1: cvl_0_1 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:00.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:00.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:27:00.704 00:27:00.704 --- 10.0.0.2 ping statistics --- 00:27:00.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:00.704 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:00.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:00.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:27:00.704 00:27:00.704 --- 10.0.0.1 ping statistics --- 00:27:00.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:00.704 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1788004 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1788004 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1788004 ']' 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:00.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:00.704 20:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:00.704 [2024-11-26 20:57:04.022468] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:00.704 [2024-11-26 20:57:04.023588] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:27:00.704 [2024-11-26 20:57:04.023655] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:00.704 [2024-11-26 20:57:04.099392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:00.704 [2024-11-26 20:57:04.159539] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:00.704 [2024-11-26 20:57:04.159588] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:00.704 [2024-11-26 20:57:04.159613] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:00.704 [2024-11-26 20:57:04.159644] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:00.704 [2024-11-26 20:57:04.159655] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:00.704 [2024-11-26 20:57:04.161237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:00.704 [2024-11-26 20:57:04.161300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:00.704 [2024-11-26 20:57:04.161330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:00.704 [2024-11-26 20:57:04.161335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:00.704 [2024-11-26 20:57:04.256167] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:00.704 [2024-11-26 20:57:04.256411] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:00.704 [2024-11-26 20:57:04.256682] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:00.704 [2024-11-26 20:57:04.257387] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:00.704 [2024-11-26 20:57:04.257657] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
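The startup records above correspond to launching nvmf_tgt in interrupt mode inside the cvl_0_0_ns_spdk namespace and then waiting for its RPC socket before issuing further commands. A minimal sketch, assuming an SPDK checkout as the working directory; the polling loop below is only an illustrative stand-in for the test's own waitforlisten helper, and spdk_get_version is used simply as a cheap RPC to probe the socket:

# Launch the target in the namespace with the same flags as the trace (-i 0 -e 0xFFFF --interrupt-mode -m 0x1E)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
nvmfpid=$!
# Poll the default RPC socket (/var/tmp/spdk.sock) until the target answers
until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is up"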
00:27:00.704 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:00.704 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:27:00.704 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:00.705 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:00.705 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:00.705 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:00.705 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:00.705 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.705 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:00.705 [2024-11-26 20:57:04.314000] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:00.705 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.705 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:27:00.705 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:00.705 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:00.705 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:00.705 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:27:00.705 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:27:00.705 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.705 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:00.705 Malloc0 00:27:00.705 [2024-11-26 20:57:04.394223] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:01.008 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.008 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:27:01.008 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:01.008 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:01.008 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1788047 00:27:01.008 20:57:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1788047 /var/tmp/bdevperf.sock 00:27:01.008 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1788047 ']' 00:27:01.008 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:01.008 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:27:01.008 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:01.008 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:01.008 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:01.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:01.008 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:27:01.008 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:01.008 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:27:01.008 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:01.008 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:01.008 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:01.008 { 00:27:01.008 "params": { 00:27:01.008 "name": "Nvme$subsystem", 00:27:01.008 "trtype": "$TEST_TRANSPORT", 00:27:01.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:01.008 "adrfam": "ipv4", 00:27:01.008 "trsvcid": "$NVMF_PORT", 00:27:01.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:01.009 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:01.009 "hdgst": ${hdgst:-false}, 00:27:01.009 "ddgst": ${ddgst:-false} 00:27:01.009 }, 00:27:01.009 "method": "bdev_nvme_attach_controller" 00:27:01.009 } 00:27:01.009 EOF 00:27:01.009 )") 00:27:01.009 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:27:01.009 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:27:01.009 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:27:01.009 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:01.009 "params": { 00:27:01.009 "name": "Nvme0", 00:27:01.009 "trtype": "tcp", 00:27:01.009 "traddr": "10.0.0.2", 00:27:01.009 "adrfam": "ipv4", 00:27:01.009 "trsvcid": "4420", 00:27:01.009 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:01.009 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:01.009 "hdgst": false, 00:27:01.009 "ddgst": false 00:27:01.009 }, 00:27:01.009 "method": "bdev_nvme_attach_controller" 00:27:01.009 }' 00:27:01.009 [2024-11-26 20:57:04.481418] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:27:01.009 [2024-11-26 20:57:04.481501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1788047 ] 00:27:01.009 [2024-11-26 20:57:04.551361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.009 [2024-11-26 20:57:04.611375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:01.291 Running I/O for 10 seconds... 00:27:01.291 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:01.291 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:27:01.291 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:01.291 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.291 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:01.291 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.291 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:01.291 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:27:01.291 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:01.291 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:27:01.291 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:27:01.291 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:27:01.291 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:27:01.291 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:27:01.291 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:27:01.291 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:27:01.291 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.291 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:01.291 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.291 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:27:01.291 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:27:01.291 20:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:27:01.549 20:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:27:01.549 20:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:27:01.549 20:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:27:01.549 20:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:27:01.549 20:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.549 20:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:01.549 20:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.809 20:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:27:01.809 20:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:27:01.809 20:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:27:01.809 20:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:27:01.809 20:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:27:01.809 20:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:27:01.809 20:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.809 20:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:01.809 [2024-11-26 20:57:05.274463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.809 [2024-11-26 20:57:05.274516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.809 [2024-11-26 20:57:05.274553] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.809 [2024-11-26 20:57:05.274570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.809 [2024-11-26 20:57:05.274586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.809 [2024-11-26 20:57:05.274600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.809 [2024-11-26 20:57:05.274626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.809 [2024-11-26 20:57:05.274640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.809 [2024-11-26 20:57:05.274655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.809 [2024-11-26 20:57:05.274669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.809 [2024-11-26 20:57:05.274684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.809 [2024-11-26 20:57:05.274698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.809 [2024-11-26 20:57:05.274713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.809 [2024-11-26 20:57:05.274727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.809 [2024-11-26 20:57:05.274742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.809 [2024-11-26 20:57:05.274755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.809 [2024-11-26 20:57:05.274770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.809 [2024-11-26 20:57:05.274784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.809 [2024-11-26 20:57:05.274799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.809 [2024-11-26 20:57:05.274813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.809 [2024-11-26 20:57:05.274839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.809 [2024-11-26 20:57:05.274853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.809 [2024-11-26 20:57:05.274869] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.809 [2024-11-26 20:57:05.274883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.809 [2024-11-26 20:57:05.274898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.809 [2024-11-26 20:57:05.274912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.809 [2024-11-26 20:57:05.274927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.809 [2024-11-26 20:57:05.274940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.274956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.274970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.274985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.274999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.275014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.275027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.275043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.275057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.275073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.275087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.275102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.275116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.275130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.275144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.275158] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.275172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.275188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.275206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.275222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.275235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.275251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.275265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.275280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.275293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.275316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.275331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.275346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.275362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.275377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.275391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.275406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.275419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.275434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.275447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.275462] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.275476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.275491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.275504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.275519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.275532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.275548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.275561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.275581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.275595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.275610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.275624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.275639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.275653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.275674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.275688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.275704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.275718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.275733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.275746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.275761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.275775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.275791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.275808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.275824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.275837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.275853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.275867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.275883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.275897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.275912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.275926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.275942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.275960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.275976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.275991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.276007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.276021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.276036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.276049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.276066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.276080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.810 [2024-11-26 20:57:05.276096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.810 [2024-11-26 20:57:05.276110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.811 [2024-11-26 20:57:05.276125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.811 [2024-11-26 20:57:05.276138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.811 [2024-11-26 20:57:05.276153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.811 [2024-11-26 20:57:05.276168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.811 [2024-11-26 20:57:05.276184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.811 [2024-11-26 20:57:05.276198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.811 [2024-11-26 20:57:05.276213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.811 [2024-11-26 20:57:05.276227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.811 [2024-11-26 20:57:05.276243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.811 [2024-11-26 20:57:05.276257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.811 [2024-11-26 20:57:05.276272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.811 [2024-11-26 20:57:05.276286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.811 [2024-11-26 20:57:05.276309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.811 [2024-11-26 20:57:05.276326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.811 [2024-11-26 20:57:05.276345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.811 [2024-11-26 20:57:05.276360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.811 [2024-11-26 20:57:05.276375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:01.811 [2024-11-26 20:57:05.276389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.811 [2024-11-26 20:57:05.276404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.811 [2024-11-26 20:57:05.276418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.811 [2024-11-26 20:57:05.276433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.811 [2024-11-26 20:57:05.276447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.811 [2024-11-26 20:57:05.277700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:01.811 20:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.811 20:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:27:01.811 20:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.811 20:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:01.811 task offset: 87296 on job bdev=Nvme0n1 fails 00:27:01.811 00:27:01.811 Latency(us) 00:27:01.811 [2024-11-26T19:57:05.508Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:01.811 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:01.811 Job: Nvme0n1 ended in about 0.40 seconds with error 00:27:01.811 Verification LBA range: start 0x0 length 0x400 00:27:01.811 Nvme0n1 : 0.40 1589.53 99.35 158.95 0.00 35560.44 2657.85 34564.17 00:27:01.811 [2024-11-26T19:57:05.508Z] =================================================================================================================== 00:27:01.811 [2024-11-26T19:57:05.508Z] Total : 1589.53 99.35 158.95 0.00 35560.44 2657.85 34564.17 00:27:01.811 [2024-11-26 20:57:05.279630] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:01.811 [2024-11-26 20:57:05.279661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144aa50 (9): Bad file descriptor 00:27:01.811 [2024-11-26 20:57:05.280899] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:27:01.811 [2024-11-26 20:57:05.281005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:01.811 [2024-11-26 20:57:05.281033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:01.811 [2024-11-26 20:57:05.281059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:27:01.811 [2024-11-26 20:57:05.281076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 
132 00:27:01.811 [2024-11-26 20:57:05.281090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.811 [2024-11-26 20:57:05.281103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x144aa50 00:27:01.811 [2024-11-26 20:57:05.281143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144aa50 (9): Bad file descriptor 00:27:01.811 [2024-11-26 20:57:05.281169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:01.811 [2024-11-26 20:57:05.281184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:01.811 [2024-11-26 20:57:05.281200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:01.811 [2024-11-26 20:57:05.281215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:01.811 20:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.811 20:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:27:02.745 20:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1788047 00:27:02.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1788047) - No such process 00:27:02.745 20:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:27:02.745 20:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:27:02.745 20:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:02.745 20:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:27:02.745 20:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:27:02.745 20:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:27:02.745 20:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:02.745 20:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:02.745 { 00:27:02.745 "params": { 00:27:02.745 "name": "Nvme$subsystem", 00:27:02.745 "trtype": "$TEST_TRANSPORT", 00:27:02.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:02.745 "adrfam": "ipv4", 00:27:02.745 "trsvcid": "$NVMF_PORT", 00:27:02.745 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:02.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:02.745 "hdgst": ${hdgst:-false}, 00:27:02.745 "ddgst": ${ddgst:-false} 00:27:02.745 }, 00:27:02.745 "method": "bdev_nvme_attach_controller" 00:27:02.745 } 00:27:02.745 EOF 00:27:02.745 )") 00:27:02.745 20:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 
-- # cat 00:27:02.745 20:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:27:02.745 20:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:27:02.745 20:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:02.745 "params": { 00:27:02.745 "name": "Nvme0", 00:27:02.745 "trtype": "tcp", 00:27:02.745 "traddr": "10.0.0.2", 00:27:02.745 "adrfam": "ipv4", 00:27:02.745 "trsvcid": "4420", 00:27:02.745 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:02.745 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:02.745 "hdgst": false, 00:27:02.745 "ddgst": false 00:27:02.745 }, 00:27:02.745 "method": "bdev_nvme_attach_controller" 00:27:02.745 }' 00:27:02.745 [2024-11-26 20:57:06.335669] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:27:02.745 [2024-11-26 20:57:06.335771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1788326 ] 00:27:02.745 [2024-11-26 20:57:06.405382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.003 [2024-11-26 20:57:06.465705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:03.003 Running I/O for 1 seconds... 00:27:04.375 1600.00 IOPS, 100.00 MiB/s 00:27:04.375 Latency(us) 00:27:04.375 [2024-11-26T19:57:08.072Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:04.375 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:04.375 Verification LBA range: start 0x0 length 0x400 00:27:04.375 Nvme0n1 : 1.01 1647.36 102.96 0.00 0.00 38211.11 5339.97 34952.53 00:27:04.375 [2024-11-26T19:57:08.072Z] =================================================================================================================== 00:27:04.375 [2024-11-26T19:57:08.072Z] Total : 1647.36 102.96 0.00 0.00 38211.11 5339.97 34952.53 00:27:04.375 20:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:27:04.375 20:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:27:04.375 20:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:04.375 20:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:04.375 20:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:27:04.375 20:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:04.375 20:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:27:04.375 20:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:04.375 20:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:27:04.375 20:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:27:04.375 20:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:04.375 rmmod nvme_tcp 00:27:04.375 rmmod nvme_fabrics 00:27:04.375 rmmod nvme_keyring 00:27:04.375 20:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:04.375 20:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:27:04.375 20:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:27:04.375 20:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1788004 ']' 00:27:04.375 20:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1788004 00:27:04.375 20:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1788004 ']' 00:27:04.375 20:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1788004 00:27:04.375 20:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:27:04.375 20:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:04.375 20:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1788004 00:27:04.375 20:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:04.375 20:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:04.375 20:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1788004' 00:27:04.375 killing process with pid 1788004 00:27:04.375 20:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1788004 00:27:04.375 20:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1788004 00:27:04.635 [2024-11-26 20:57:08.200321] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:27:04.635 20:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:04.635 20:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:04.635 20:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:04.635 20:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:27:04.635 20:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:27:04.635 20:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:04.635 20:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:27:04.635 20:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:04.635 20:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:04.635 20:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.635 20:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:04.635 20:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.172 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:07.172 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:07.172 00:27:07.172 real 0m8.690s 00:27:07.172 user 0m17.083s 00:27:07.172 sys 0m3.742s 00:27:07.172 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:07.172 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:07.172 ************************************ 00:27:07.172 END TEST nvmf_host_management 00:27:07.172 ************************************ 00:27:07.172 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:27:07.172 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:07.173 ************************************ 00:27:07.173 START TEST nvmf_lvol 00:27:07.173 ************************************ 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:27:07.173 * Looking for test storage... 
00:27:07.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:07.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.173 --rc genhtml_branch_coverage=1 00:27:07.173 --rc genhtml_function_coverage=1 00:27:07.173 --rc genhtml_legend=1 00:27:07.173 --rc geninfo_all_blocks=1 00:27:07.173 --rc geninfo_unexecuted_blocks=1 00:27:07.173 00:27:07.173 ' 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:07.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.173 --rc genhtml_branch_coverage=1 00:27:07.173 --rc genhtml_function_coverage=1 00:27:07.173 --rc genhtml_legend=1 00:27:07.173 --rc geninfo_all_blocks=1 00:27:07.173 --rc geninfo_unexecuted_blocks=1 00:27:07.173 00:27:07.173 ' 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:07.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.173 --rc genhtml_branch_coverage=1 00:27:07.173 --rc genhtml_function_coverage=1 00:27:07.173 --rc genhtml_legend=1 00:27:07.173 --rc geninfo_all_blocks=1 00:27:07.173 --rc geninfo_unexecuted_blocks=1 00:27:07.173 00:27:07.173 ' 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:07.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.173 --rc genhtml_branch_coverage=1 00:27:07.173 --rc genhtml_function_coverage=1 00:27:07.173 --rc genhtml_legend=1 00:27:07.173 --rc geninfo_all_blocks=1 00:27:07.173 --rc geninfo_unexecuted_blocks=1 00:27:07.173 00:27:07.173 ' 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.173 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:27:07.174 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:07.174 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:07.174 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:07.174 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:07.174 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:07.174 20:57:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:07.174 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:07.174 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:07.174 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:07.174 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:07.174 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:07.174 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:07.174 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:27:07.174 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:27:07.174 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:07.174 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:27:07.174 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:07.174 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:07.174 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:07.174 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:07.174 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:07.174 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.174 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:07.174 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.174 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:07.174 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:07.174 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:27:07.174 20:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:09.077 20:57:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:09.077 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:09.077 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.077 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:09.078 Found net devices under 0000:09:00.0: cvl_0_0 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:09.078 Found net devices under 0000:09:00.1: cvl_0_1 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:09.078 
20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:09.078 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:09.337 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:09.337 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:09.337 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:09.337 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:09.337 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:09.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:09.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:27:09.337 00:27:09.337 --- 10.0.0.2 ping statistics --- 00:27:09.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.337 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:27:09.337 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:09.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:09.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:27:09.337 00:27:09.337 --- 10.0.0.1 ping statistics --- 00:27:09.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.337 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:27:09.337 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:09.337 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:27:09.337 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:09.337 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:09.337 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:09.337 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:09.337 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:09.337 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:09.337 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:09.337 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:27:09.337 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:09.337 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:09.337 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:09.337 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1791032 00:27:09.337 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:27:09.337 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1791032 00:27:09.337 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1791032 ']' 00:27:09.337 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:09.337 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:09.337 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:09.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:09.337 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:09.337 20:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:09.337 [2024-11-26 20:57:12.905396] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:27:09.337 [2024-11-26 20:57:12.906534] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:27:09.337 [2024-11-26 20:57:12.906590] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:09.337 [2024-11-26 20:57:12.980592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:09.596 [2024-11-26 20:57:13.042460] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:09.596 [2024-11-26 20:57:13.042506] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:09.596 [2024-11-26 20:57:13.042520] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:09.596 [2024-11-26 20:57:13.042531] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:09.596 [2024-11-26 20:57:13.042541] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:09.596 [2024-11-26 20:57:13.044026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:09.596 [2024-11-26 20:57:13.044090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:09.596 [2024-11-26 20:57:13.044093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.596 [2024-11-26 20:57:13.143911] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:09.596 [2024-11-26 20:57:13.144111] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:09.596 [2024-11-26 20:57:13.144130] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:09.596 [2024-11-26 20:57:13.144381] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
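For readers following the trace, the nvmftestinit/nvmfappstart sequence above reduces to a short shell recipe: move one port of the E810 pair into a private network namespace, address both ends, open the NVMe/TCP port, sanity-check connectivity, and start nvmf_tgt inside the namespace in interrupt mode. The listing below is a condensed, hand-written recap of the commands traced above, not part of the test scripts themselves; the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses and the core mask 0x7 are simply the values this run happened to use, and ./build/bin/nvmf_tgt stands in for the full workspace path in the log. Run as root.

# Target-side port goes into its own namespace; the initiator-side port stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow NVMe/TCP (port 4420) in from the initiator interface; the comment tag lets the
# teardown strip exactly this rule later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Check both directions, then launch the target inside the namespace in interrupt mode.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &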
00:27:09.596 20:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:09.596 20:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:27:09.596 20:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:09.596 20:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:09.596 20:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:09.596 20:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:09.596 20:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:09.856 [2024-11-26 20:57:13.448819] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:09.856 20:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:10.114 20:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:27:10.114 20:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:10.373 20:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:27:10.373 20:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:27:10.631 20:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:27:11.197 20:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f1209100-fbbc-4b4d-9ead-2b98bbdb5871 00:27:11.197 20:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f1209100-fbbc-4b4d-9ead-2b98bbdb5871 lvol 20 00:27:11.197 20:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=cbe0db20-eb63-4b2f-b304-fbe1206dff5b 00:27:11.197 20:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:11.764 20:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cbe0db20-eb63-4b2f-b304-fbe1206dff5b 00:27:11.764 20:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:12.023 [2024-11-26 20:57:15.676987] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:27:12.023 20:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:12.280 20:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1791341 00:27:12.280 20:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:27:12.280 20:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:27:13.654 20:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot cbe0db20-eb63-4b2f-b304-fbe1206dff5b MY_SNAPSHOT 00:27:13.654 20:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5f2d8459-80f9-4c2f-a9ca-9ea22e594ec2 00:27:13.654 20:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize cbe0db20-eb63-4b2f-b304-fbe1206dff5b 30 00:27:13.911 20:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 5f2d8459-80f9-4c2f-a9ca-9ea22e594ec2 MY_CLONE 00:27:14.478 20:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=5e6410a8-c3f1-47a2-972c-83ca43861120 00:27:14.478 20:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 5e6410a8-c3f1-47a2-972c-83ca43861120 00:27:15.043 20:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1791341 00:27:23.148 Initializing NVMe Controllers 00:27:23.148 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:27:23.148 Controller IO queue size 128, less than required. 00:27:23.148 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:23.148 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:27:23.148 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:27:23.148 Initialization complete. Launching workers. 
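The RPC flow the nvmf_lvol test drives above is easier to follow without the xtrace noise: it stripes two malloc bdevs into a raid0, puts a logical-volume store on top, exports one lvol over NVMe/TCP, and then snapshots, resizes, clones and inflates that lvol while spdk_nvme_perf writes to it from the host side. The following is an illustrative reconstruction from the commands traced above, with $rpc standing for scripts/rpc.py; the sizes 64/512/20/30, the nqn and serial, and the perf flags come straight from this run, while the bdev names and UUIDs are whatever the create calls return.

rpc=scripts/rpc.py

# Transport and backing store: two 64 MiB malloc bdevs striped into raid0, lvstore "lvs" on top.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                        # auto-named Malloc0 in this run
$rpc bdev_malloc_create 64 512                        # auto-named Malloc1 in this run
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)        # prints the lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)       # 20 MiB lvol, prints its UUID

# Export the lvol over NVMe/TCP on 10.0.0.2:4420.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# With spdk_nvme_perf writing to the namespace, exercise the lvol management path under load.
./build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
perf_pid=$!
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # snapshot the live lvol
$rpc bdev_lvol_resize "$lvol" 30                      # grow it from 20 to 30 MiB
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)        # thin clone of the snapshot
$rpc bdev_lvol_inflate "$clone"                       # decouple the clone from its parent
wait "$perf_pid"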
00:27:23.148 ======================================================== 00:27:23.148 Latency(us) 00:27:23.148 Device Information : IOPS MiB/s Average min max 00:27:23.148 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10372.33 40.52 12343.66 5583.83 69093.08 00:27:23.148 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10428.93 40.74 12278.01 4880.42 78007.49 00:27:23.148 ======================================================== 00:27:23.148 Total : 20801.26 81.25 12310.74 4880.42 78007.49 00:27:23.148 00:27:23.148 20:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:23.148 20:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cbe0db20-eb63-4b2f-b304-fbe1206dff5b 00:27:23.407 20:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f1209100-fbbc-4b4d-9ead-2b98bbdb5871 00:27:23.665 20:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:27:23.665 20:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:27:23.665 20:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:27:23.665 20:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:23.665 20:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:27:23.665 20:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:23.665 20:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:27:23.665 20:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:23.665 20:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:23.665 rmmod nvme_tcp 00:27:23.665 rmmod nvme_fabrics 00:27:23.665 rmmod nvme_keyring 00:27:23.665 20:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:23.665 20:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:27:23.665 20:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:27:23.665 20:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1791032 ']' 00:27:23.665 20:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1791032 00:27:23.665 20:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1791032 ']' 00:27:23.665 20:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1791032 00:27:23.665 20:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:27:23.665 20:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:23.665 20:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1791032 00:27:23.665 20:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:23.665 20:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:23.665 20:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1791032' 00:27:23.665 killing process with pid 1791032 00:27:23.665 20:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1791032 00:27:23.665 20:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1791032 00:27:23.925 20:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:23.925 20:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:23.925 20:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:23.925 20:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:27:23.925 20:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:27:23.925 20:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:23.925 20:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:27:23.925 20:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:23.925 20:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:23.925 20:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.925 20:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:23.925 20:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.463 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:26.463 00:27:26.463 real 0m19.248s 00:27:26.463 user 0m56.067s 00:27:26.463 sys 0m7.962s 00:27:26.463 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:26.463 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:26.463 ************************************ 00:27:26.463 END TEST nvmf_lvol 00:27:26.463 ************************************ 00:27:26.463 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:27:26.463 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:26.463 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:26.463 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:26.463 ************************************ 00:27:26.463 START TEST nvmf_lvs_grow 00:27:26.463 
************************************ 00:27:26.463 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:27:26.463 * Looking for test storage... 00:27:26.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:26.463 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:26.463 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:27:26.463 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:26.463 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:26.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.464 --rc genhtml_branch_coverage=1 00:27:26.464 --rc genhtml_function_coverage=1 00:27:26.464 --rc genhtml_legend=1 00:27:26.464 --rc geninfo_all_blocks=1 00:27:26.464 --rc geninfo_unexecuted_blocks=1 00:27:26.464 00:27:26.464 ' 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:26.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.464 --rc genhtml_branch_coverage=1 00:27:26.464 --rc genhtml_function_coverage=1 00:27:26.464 --rc genhtml_legend=1 00:27:26.464 --rc geninfo_all_blocks=1 00:27:26.464 --rc geninfo_unexecuted_blocks=1 00:27:26.464 00:27:26.464 ' 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:26.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.464 --rc genhtml_branch_coverage=1 00:27:26.464 --rc genhtml_function_coverage=1 00:27:26.464 --rc genhtml_legend=1 00:27:26.464 --rc geninfo_all_blocks=1 00:27:26.464 --rc geninfo_unexecuted_blocks=1 00:27:26.464 00:27:26.464 ' 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:26.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.464 --rc genhtml_branch_coverage=1 00:27:26.464 --rc genhtml_function_coverage=1 00:27:26.464 --rc genhtml_legend=1 00:27:26.464 --rc geninfo_all_blocks=1 00:27:26.464 --rc geninfo_unexecuted_blocks=1 00:27:26.464 00:27:26.464 ' 00:27:26.464 20:57:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:26.464 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:26.465 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:26.465 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:27:26.465 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:26.465 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:26.465 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:26.465 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:26.465 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:26.465 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.465 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:26.465 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.465 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:26.465 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:26.465 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:27:26.465 20:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:28.394 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:28.394 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:27:28.394 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:28.394 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:28.394 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:28.394 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:28.394 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:28.394 20:57:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:27:28.394 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:28.394 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:27:28.394 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:27:28.394 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:27:28.394 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:27:28.394 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:27:28.394 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:27:28.394 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:28.394 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:28.394 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:28.394 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:28.394 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:28.394 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:28.394 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:28.394 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:28.394 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:28.394 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:28.394 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:28.394 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:28.394 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:28.395 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:28.395 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:28.395 Found net devices under 0000:09:00.0: cvl_0_0 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:28.395 Found net devices under 0000:09:00.1: cvl_0_1 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:28.395 20:57:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:28.395 20:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:28.395 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:28.395 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:28.395 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:28.395 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:28.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:28.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:27:28.395 00:27:28.395 --- 10.0.0.2 ping statistics --- 00:27:28.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.395 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:27:28.395 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:28.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:28.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:27:28.395 00:27:28.395 --- 10.0.0.1 ping statistics --- 00:27:28.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.395 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:27:28.395 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:28.395 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:27:28.395 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:28.395 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:28.395 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:28.395 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:28.395 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:28.395 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:28.395 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:28.654 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:27:28.654 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:28.654 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:28.654 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:28.654 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1794712 00:27:28.654 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:27:28.654 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1794712 00:27:28.654 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1794712 ']' 00:27:28.654 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:28.654 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:28.654 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:28.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:28.654 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:28.654 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:28.654 [2024-11-26 20:57:32.143588] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
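For reference, the nvmf_tcp_init sequence traced above reduces to roughly the following shell steps. This is a minimal sketch, assuming the same e810 interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing this particular run detected; the harness additionally tags its iptables rule with an SPDK_NVMF comment, omitted here.

# target side lives in its own network namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# initiator keeps 10.0.0.1, target gets 10.0.0.2 inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# allow NVMe/TCP (port 4420) in from the initiator-side interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# sanity-check reachability in both directions
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1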
00:27:28.654 [2024-11-26 20:57:32.144733] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:27:28.654 [2024-11-26 20:57:32.144813] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:28.654 [2024-11-26 20:57:32.219383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.654 [2024-11-26 20:57:32.275604] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:28.654 [2024-11-26 20:57:32.275658] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:28.654 [2024-11-26 20:57:32.275682] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:28.654 [2024-11-26 20:57:32.275693] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:28.654 [2024-11-26 20:57:32.275703] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:28.654 [2024-11-26 20:57:32.276263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.912 [2024-11-26 20:57:32.365745] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:28.912 [2024-11-26 20:57:32.366038] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:28.912 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:28.912 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:27:28.912 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:28.912 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:28.912 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:28.912 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:28.912 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:29.171 [2024-11-26 20:57:32.672883] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:29.171 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:27:29.171 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:29.171 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:29.171 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:29.171 ************************************ 00:27:29.171 START TEST lvs_grow_clean 00:27:29.171 ************************************ 00:27:29.171 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:27:29.171 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:27:29.171 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:27:29.171 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:27:29.171 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:27:29.171 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:27:29.171 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:27:29.171 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:29.171 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:29.171 20:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:29.430 20:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:27:29.430 20:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:27:29.688 20:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=23689e0a-bb38-41ee-8e8b-012ef4ffcc20 00:27:29.688 20:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 23689e0a-bb38-41ee-8e8b-012ef4ffcc20 00:27:29.688 20:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:27:29.946 20:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:27:29.946 20:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:27:29.946 20:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 23689e0a-bb38-41ee-8e8b-012ef4ffcc20 lvol 150 00:27:30.204 20:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=80fdd29f-95c7-4483-9da9-817bb47cf5ef 00:27:30.204 20:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:30.204 20:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:27:30.462 [2024-11-26 20:57:34.100785] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:27:30.462 [2024-11-26 20:57:34.100878] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:27:30.462 true 00:27:30.462 20:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 23689e0a-bb38-41ee-8e8b-012ef4ffcc20 00:27:30.462 20:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:27:30.721 20:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:27:30.721 20:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:30.979 20:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 80fdd29f-95c7-4483-9da9-817bb47cf5ef 00:27:31.237 20:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:31.495 [2024-11-26 20:57:35.177033] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:31.754 20:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:32.012 20:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1795147 00:27:32.012 20:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:27:32.012 20:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:32.012 20:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1795147 /var/tmp/bdevperf.sock 00:27:32.012 20:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1795147 ']' 00:27:32.012 20:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:27:32.012 20:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:32.012 20:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:32.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:32.012 20:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:32.012 20:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:27:32.012 [2024-11-26 20:57:35.504694] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:27:32.012 [2024-11-26 20:57:35.504769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1795147 ] 00:27:32.012 [2024-11-26 20:57:35.574421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.012 [2024-11-26 20:57:35.635131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:32.270 20:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:32.270 20:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:27:32.270 20:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:27:32.529 Nvme0n1 00:27:32.529 20:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:27:32.788 [ 00:27:32.788 { 00:27:32.788 "name": "Nvme0n1", 00:27:32.788 "aliases": [ 00:27:32.788 "80fdd29f-95c7-4483-9da9-817bb47cf5ef" 00:27:32.788 ], 00:27:32.788 "product_name": "NVMe disk", 00:27:32.788 "block_size": 4096, 00:27:32.788 "num_blocks": 38912, 00:27:32.788 "uuid": "80fdd29f-95c7-4483-9da9-817bb47cf5ef", 00:27:32.788 "numa_id": 0, 00:27:32.788 "assigned_rate_limits": { 00:27:32.788 "rw_ios_per_sec": 0, 00:27:32.788 "rw_mbytes_per_sec": 0, 00:27:32.788 "r_mbytes_per_sec": 0, 00:27:32.788 "w_mbytes_per_sec": 0 00:27:32.788 }, 00:27:32.788 "claimed": false, 00:27:32.788 "zoned": false, 00:27:32.788 "supported_io_types": { 00:27:32.788 "read": true, 00:27:32.788 "write": true, 00:27:32.788 "unmap": true, 00:27:32.788 "flush": true, 00:27:32.788 "reset": true, 00:27:32.788 "nvme_admin": true, 00:27:32.788 "nvme_io": true, 00:27:32.788 "nvme_io_md": false, 00:27:32.788 "write_zeroes": true, 00:27:32.788 "zcopy": false, 00:27:32.788 "get_zone_info": false, 00:27:32.788 "zone_management": false, 00:27:32.788 "zone_append": false, 00:27:32.788 "compare": true, 00:27:32.788 "compare_and_write": true, 00:27:32.788 "abort": true, 00:27:32.788 "seek_hole": false, 00:27:32.788 "seek_data": false, 00:27:32.788 "copy": true, 
00:27:32.788 "nvme_iov_md": false 00:27:32.788 }, 00:27:32.788 "memory_domains": [ 00:27:32.788 { 00:27:32.788 "dma_device_id": "system", 00:27:32.788 "dma_device_type": 1 00:27:32.788 } 00:27:32.788 ], 00:27:32.788 "driver_specific": { 00:27:32.788 "nvme": [ 00:27:32.788 { 00:27:32.788 "trid": { 00:27:32.788 "trtype": "TCP", 00:27:32.788 "adrfam": "IPv4", 00:27:32.788 "traddr": "10.0.0.2", 00:27:32.788 "trsvcid": "4420", 00:27:32.788 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:32.788 }, 00:27:32.788 "ctrlr_data": { 00:27:32.788 "cntlid": 1, 00:27:32.788 "vendor_id": "0x8086", 00:27:32.788 "model_number": "SPDK bdev Controller", 00:27:32.788 "serial_number": "SPDK0", 00:27:32.788 "firmware_revision": "25.01", 00:27:32.788 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:32.788 "oacs": { 00:27:32.788 "security": 0, 00:27:32.788 "format": 0, 00:27:32.788 "firmware": 0, 00:27:32.788 "ns_manage": 0 00:27:32.788 }, 00:27:32.788 "multi_ctrlr": true, 00:27:32.788 "ana_reporting": false 00:27:32.788 }, 00:27:32.788 "vs": { 00:27:32.788 "nvme_version": "1.3" 00:27:32.788 }, 00:27:32.788 "ns_data": { 00:27:32.788 "id": 1, 00:27:32.788 "can_share": true 00:27:32.788 } 00:27:32.788 } 00:27:32.788 ], 00:27:32.788 "mp_policy": "active_passive" 00:27:32.788 } 00:27:32.788 } 00:27:32.788 ] 00:27:32.788 20:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1795199 00:27:32.788 20:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:32.788 20:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:27:33.046 Running I/O for 10 seconds... 
00:27:33.981 Latency(us) 00:27:33.981 [2024-11-26T19:57:37.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:33.981 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:33.981 Nvme0n1 : 1.00 14893.00 58.18 0.00 0.00 0.00 0.00 0.00 00:27:33.981 [2024-11-26T19:57:37.678Z] =================================================================================================================== 00:27:33.981 [2024-11-26T19:57:37.678Z] Total : 14893.00 58.18 0.00 0.00 0.00 0.00 0.00 00:27:33.981 00:27:34.914 20:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 23689e0a-bb38-41ee-8e8b-012ef4ffcc20 00:27:34.914 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:34.914 Nvme0n1 : 2.00 15003.00 58.61 0.00 0.00 0.00 0.00 0.00 00:27:34.914 [2024-11-26T19:57:38.611Z] =================================================================================================================== 00:27:34.914 [2024-11-26T19:57:38.611Z] Total : 15003.00 58.61 0.00 0.00 0.00 0.00 0.00 00:27:34.914 00:27:35.172 true 00:27:35.172 20:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 23689e0a-bb38-41ee-8e8b-012ef4ffcc20 00:27:35.172 20:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:27:35.430 20:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:27:35.430 20:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:27:35.430 20:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1795199 00:27:35.994 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:35.994 Nvme0n1 : 3.00 15082.00 58.91 0.00 0.00 0.00 0.00 0.00 00:27:35.994 [2024-11-26T19:57:39.691Z] =================================================================================================================== 00:27:35.994 [2024-11-26T19:57:39.691Z] Total : 15082.00 58.91 0.00 0.00 0.00 0.00 0.00 00:27:35.994 00:27:36.927 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:36.927 Nvme0n1 : 4.00 15153.25 59.19 0.00 0.00 0.00 0.00 0.00 00:27:36.927 [2024-11-26T19:57:40.624Z] =================================================================================================================== 00:27:36.927 [2024-11-26T19:57:40.624Z] Total : 15153.25 59.19 0.00 0.00 0.00 0.00 0.00 00:27:36.927 00:27:38.302 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:38.302 Nvme0n1 : 5.00 15221.40 59.46 0.00 0.00 0.00 0.00 0.00 00:27:38.302 [2024-11-26T19:57:41.999Z] =================================================================================================================== 00:27:38.302 [2024-11-26T19:57:41.999Z] Total : 15221.40 59.46 0.00 0.00 0.00 0.00 0.00 00:27:38.302 00:27:39.236 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:39.236 Nvme0n1 : 6.00 15288.00 59.72 0.00 0.00 0.00 0.00 0.00 00:27:39.236 [2024-11-26T19:57:42.933Z] 
=================================================================================================================== 00:27:39.236 [2024-11-26T19:57:42.933Z] Total : 15288.00 59.72 0.00 0.00 0.00 0.00 0.00 00:27:39.236 00:27:40.169 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:40.169 Nvme0n1 : 7.00 15335.57 59.90 0.00 0.00 0.00 0.00 0.00 00:27:40.169 [2024-11-26T19:57:43.866Z] =================================================================================================================== 00:27:40.169 [2024-11-26T19:57:43.866Z] Total : 15335.57 59.90 0.00 0.00 0.00 0.00 0.00 00:27:40.169 00:27:41.101 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:41.101 Nvme0n1 : 8.00 15373.38 60.05 0.00 0.00 0.00 0.00 0.00 00:27:41.101 [2024-11-26T19:57:44.798Z] =================================================================================================================== 00:27:41.101 [2024-11-26T19:57:44.798Z] Total : 15373.38 60.05 0.00 0.00 0.00 0.00 0.00 00:27:41.101 00:27:42.033 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:42.033 Nvme0n1 : 9.00 15415.00 60.21 0.00 0.00 0.00 0.00 0.00 00:27:42.033 [2024-11-26T19:57:45.730Z] =================================================================================================================== 00:27:42.033 [2024-11-26T19:57:45.730Z] Total : 15415.00 60.21 0.00 0.00 0.00 0.00 0.00 00:27:42.033 00:27:42.963 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:42.963 Nvme0n1 : 10.00 15448.30 60.34 0.00 0.00 0.00 0.00 0.00 00:27:42.963 [2024-11-26T19:57:46.660Z] =================================================================================================================== 00:27:42.963 [2024-11-26T19:57:46.660Z] Total : 15448.30 60.34 0.00 0.00 0.00 0.00 0.00 00:27:42.963 00:27:42.963 00:27:42.963 Latency(us) 00:27:42.963 [2024-11-26T19:57:46.660Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:42.963 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:42.963 Nvme0n1 : 10.01 15449.62 60.35 0.00 0.00 8280.24 4247.70 18155.90 00:27:42.963 [2024-11-26T19:57:46.660Z] =================================================================================================================== 00:27:42.963 [2024-11-26T19:57:46.660Z] Total : 15449.62 60.35 0.00 0.00 8280.24 4247.70 18155.90 00:27:42.963 { 00:27:42.963 "results": [ 00:27:42.963 { 00:27:42.963 "job": "Nvme0n1", 00:27:42.963 "core_mask": "0x2", 00:27:42.963 "workload": "randwrite", 00:27:42.963 "status": "finished", 00:27:42.963 "queue_depth": 128, 00:27:42.963 "io_size": 4096, 00:27:42.963 "runtime": 10.007432, 00:27:42.963 "iops": 15449.617844018325, 00:27:42.963 "mibps": 60.35006970319658, 00:27:42.963 "io_failed": 0, 00:27:42.963 "io_timeout": 0, 00:27:42.963 "avg_latency_us": 8280.239824936993, 00:27:42.963 "min_latency_us": 4247.7037037037035, 00:27:42.963 "max_latency_us": 18155.89925925926 00:27:42.963 } 00:27:42.963 ], 00:27:42.963 "core_count": 1 00:27:42.963 } 00:27:42.963 20:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1795147 00:27:42.963 20:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1795147 ']' 00:27:42.963 20:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1795147 
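The clean-variant run that finishes here boils down to the RPC sequence below. This is a condensed sketch: the full workspace path is abbreviated to rpc.py, the backing file name is a placeholder, and the cluster counts 49 and 99 are simply the values observed in this run for a 200M -> 400M AIO file with 4M clusters.

truncate -s 200M aio_bdev_file
rpc.py bdev_aio_create aio_bdev_file aio_bdev 4096
lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
rpc.py bdev_lvol_create -u "$lvs" lvol 150
rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 in this run
# grow the backing file, rescan the AIO bdev, then grow the lvstore into the new space
truncate -s 400M aio_bdev_file
rpc.py bdev_aio_rescan aio_bdev
rpc.py bdev_lvol_grow_lvstore -u "$lvs"
rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99 in this run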
00:27:42.963 20:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:27:42.963 20:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:42.963 20:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1795147 00:27:42.963 20:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:42.963 20:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:42.963 20:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1795147' 00:27:42.964 killing process with pid 1795147 00:27:42.964 20:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1795147 00:27:42.964 Received shutdown signal, test time was about 10.000000 seconds 00:27:42.964 00:27:42.964 Latency(us) 00:27:42.964 [2024-11-26T19:57:46.661Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:42.964 [2024-11-26T19:57:46.661Z] =================================================================================================================== 00:27:42.964 [2024-11-26T19:57:46.661Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:42.964 20:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1795147 00:27:43.221 20:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:43.480 20:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:43.738 20:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 23689e0a-bb38-41ee-8e8b-012ef4ffcc20 00:27:43.738 20:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:27:43.996 20:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:27:43.997 20:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:27:43.997 20:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:27:44.256 [2024-11-26 20:57:47.932822] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:27:44.514 20:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 23689e0a-bb38-41ee-8e8b-012ef4ffcc20 
00:27:44.514 20:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:27:44.514 20:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 23689e0a-bb38-41ee-8e8b-012ef4ffcc20 00:27:44.514 20:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:44.514 20:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:44.514 20:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:44.514 20:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:44.514 20:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:44.514 20:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:44.514 20:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:44.514 20:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:27:44.514 20:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 23689e0a-bb38-41ee-8e8b-012ef4ffcc20 00:27:44.772 request: 00:27:44.772 { 00:27:44.772 "uuid": "23689e0a-bb38-41ee-8e8b-012ef4ffcc20", 00:27:44.772 "method": "bdev_lvol_get_lvstores", 00:27:44.772 "req_id": 1 00:27:44.772 } 00:27:44.772 Got JSON-RPC error response 00:27:44.772 response: 00:27:44.772 { 00:27:44.772 "code": -19, 00:27:44.772 "message": "No such device" 00:27:44.772 } 00:27:44.772 20:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:27:44.772 20:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:44.772 20:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:44.772 20:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:44.772 20:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:45.030 aio_bdev 00:27:45.030 20:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
80fdd29f-95c7-4483-9da9-817bb47cf5ef 00:27:45.030 20:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=80fdd29f-95c7-4483-9da9-817bb47cf5ef 00:27:45.030 20:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:45.030 20:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:27:45.030 20:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:45.030 20:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:45.030 20:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:45.288 20:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 80fdd29f-95c7-4483-9da9-817bb47cf5ef -t 2000 00:27:45.690 [ 00:27:45.690 { 00:27:45.690 "name": "80fdd29f-95c7-4483-9da9-817bb47cf5ef", 00:27:45.690 "aliases": [ 00:27:45.690 "lvs/lvol" 00:27:45.690 ], 00:27:45.690 "product_name": "Logical Volume", 00:27:45.690 "block_size": 4096, 00:27:45.690 "num_blocks": 38912, 00:27:45.690 "uuid": "80fdd29f-95c7-4483-9da9-817bb47cf5ef", 00:27:45.690 "assigned_rate_limits": { 00:27:45.690 "rw_ios_per_sec": 0, 00:27:45.690 "rw_mbytes_per_sec": 0, 00:27:45.690 "r_mbytes_per_sec": 0, 00:27:45.690 "w_mbytes_per_sec": 0 00:27:45.690 }, 00:27:45.690 "claimed": false, 00:27:45.690 "zoned": false, 00:27:45.690 "supported_io_types": { 00:27:45.690 "read": true, 00:27:45.690 "write": true, 00:27:45.690 "unmap": true, 00:27:45.690 "flush": false, 00:27:45.690 "reset": true, 00:27:45.690 "nvme_admin": false, 00:27:45.690 "nvme_io": false, 00:27:45.690 "nvme_io_md": false, 00:27:45.690 "write_zeroes": true, 00:27:45.690 "zcopy": false, 00:27:45.690 "get_zone_info": false, 00:27:45.690 "zone_management": false, 00:27:45.690 "zone_append": false, 00:27:45.690 "compare": false, 00:27:45.690 "compare_and_write": false, 00:27:45.690 "abort": false, 00:27:45.690 "seek_hole": true, 00:27:45.690 "seek_data": true, 00:27:45.690 "copy": false, 00:27:45.690 "nvme_iov_md": false 00:27:45.690 }, 00:27:45.690 "driver_specific": { 00:27:45.690 "lvol": { 00:27:45.690 "lvol_store_uuid": "23689e0a-bb38-41ee-8e8b-012ef4ffcc20", 00:27:45.690 "base_bdev": "aio_bdev", 00:27:45.690 "thin_provision": false, 00:27:45.690 "num_allocated_clusters": 38, 00:27:45.690 "snapshot": false, 00:27:45.690 "clone": false, 00:27:45.690 "esnap_clone": false 00:27:45.690 } 00:27:45.690 } 00:27:45.690 } 00:27:45.690 ] 00:27:45.690 20:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:27:45.690 20:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 23689e0a-bb38-41ee-8e8b-012ef4ffcc20 00:27:45.690 20:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:27:45.690 20:57:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:27:45.690 20:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 23689e0a-bb38-41ee-8e8b-012ef4ffcc20 00:27:45.690 20:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:27:45.975 20:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:27:45.975 20:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 80fdd29f-95c7-4483-9da9-817bb47cf5ef 00:27:46.233 20:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 23689e0a-bb38-41ee-8e8b-012ef4ffcc20 00:27:46.800 20:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:27:46.800 20:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:47.058 00:27:47.058 real 0m17.789s 00:27:47.058 user 0m17.428s 00:27:47.058 sys 0m1.780s 00:27:47.058 20:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:47.058 20:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:27:47.058 ************************************ 00:27:47.058 END TEST lvs_grow_clean 00:27:47.058 ************************************ 00:27:47.058 20:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:27:47.058 20:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:47.058 20:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:47.058 20:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:47.058 ************************************ 00:27:47.058 START TEST lvs_grow_dirty 00:27:47.058 ************************************ 00:27:47.058 20:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:27:47.058 20:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:27:47.058 20:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:27:47.058 20:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:27:47.059 20:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:27:47.059 20:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:27:47.059 20:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:27:47.059 20:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:47.059 20:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:47.059 20:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:47.317 20:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:27:47.317 20:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:27:47.576 20:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=f8a88770-3eaa-46eb-b072-170420791def 00:27:47.576 20:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8a88770-3eaa-46eb-b072-170420791def 00:27:47.576 20:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:27:47.835 20:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:27:47.835 20:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:27:47.835 20:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f8a88770-3eaa-46eb-b072-170420791def lvol 150 00:27:48.092 20:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1f48ba8d-185e-40e2-b824-cf3d18b118df 00:27:48.092 20:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:48.092 20:57:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:27:48.351 [2024-11-26 20:57:51.988765] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:27:48.351 [2024-11-26 20:57:51.988852] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:27:48.351 true 00:27:48.351 20:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8a88770-3eaa-46eb-b072-170420791def 00:27:48.351 20:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:27:48.609 20:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:27:48.609 20:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:48.867 20:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1f48ba8d-185e-40e2-b824-cf3d18b118df 00:27:49.125 20:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:49.383 [2024-11-26 20:57:53.065145] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:49.642 20:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:49.900 20:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1797196 00:27:49.900 20:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:27:49.900 20:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:49.900 20:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1797196 /var/tmp/bdevperf.sock 00:27:49.900 20:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1797196 ']' 00:27:49.900 20:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:49.900 20:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:49.900 20:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:49.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
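As in the clean variant, the I/O side here is a separate bdevperf process attached over NVMe/TCP. Stripped of workspace paths, the sequence this dirty run is about to drive looks roughly like the sketch below; the RPC socket path, NQN, target address and core mask are copied from this run and will differ elsewhere.

# start bdevperf in wait mode (-z) on its own RPC socket
build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
# attach the exported namespace as bdev Nvme0n1 via NVMe/TCP
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
# kick off the queued workload and wait for the 10s randwrite pass to finish
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests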
00:27:49.900 20:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:49.900 20:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:27:49.900 [2024-11-26 20:57:53.392518] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:27:49.900 [2024-11-26 20:57:53.392608] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1797196 ] 00:27:49.900 [2024-11-26 20:57:53.459759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.900 [2024-11-26 20:57:53.519636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:50.158 20:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:50.158 20:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:27:50.158 20:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:27:50.417 Nvme0n1 00:27:50.417 20:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:27:50.675 [ 00:27:50.675 { 00:27:50.675 "name": "Nvme0n1", 00:27:50.675 "aliases": [ 00:27:50.675 "1f48ba8d-185e-40e2-b824-cf3d18b118df" 00:27:50.675 ], 00:27:50.675 "product_name": "NVMe disk", 00:27:50.675 "block_size": 4096, 00:27:50.675 "num_blocks": 38912, 00:27:50.676 "uuid": "1f48ba8d-185e-40e2-b824-cf3d18b118df", 00:27:50.676 "numa_id": 0, 00:27:50.676 "assigned_rate_limits": { 00:27:50.676 "rw_ios_per_sec": 0, 00:27:50.676 "rw_mbytes_per_sec": 0, 00:27:50.676 "r_mbytes_per_sec": 0, 00:27:50.676 "w_mbytes_per_sec": 0 00:27:50.676 }, 00:27:50.676 "claimed": false, 00:27:50.676 "zoned": false, 00:27:50.676 "supported_io_types": { 00:27:50.676 "read": true, 00:27:50.676 "write": true, 00:27:50.676 "unmap": true, 00:27:50.676 "flush": true, 00:27:50.676 "reset": true, 00:27:50.676 "nvme_admin": true, 00:27:50.676 "nvme_io": true, 00:27:50.676 "nvme_io_md": false, 00:27:50.676 "write_zeroes": true, 00:27:50.676 "zcopy": false, 00:27:50.676 "get_zone_info": false, 00:27:50.676 "zone_management": false, 00:27:50.676 "zone_append": false, 00:27:50.676 "compare": true, 00:27:50.676 "compare_and_write": true, 00:27:50.676 "abort": true, 00:27:50.676 "seek_hole": false, 00:27:50.676 "seek_data": false, 00:27:50.676 "copy": true, 00:27:50.676 "nvme_iov_md": false 00:27:50.676 }, 00:27:50.676 "memory_domains": [ 00:27:50.676 { 00:27:50.676 "dma_device_id": "system", 00:27:50.676 "dma_device_type": 1 00:27:50.676 } 00:27:50.676 ], 00:27:50.676 "driver_specific": { 00:27:50.676 "nvme": [ 00:27:50.676 { 00:27:50.676 "trid": { 00:27:50.676 "trtype": "TCP", 00:27:50.676 "adrfam": "IPv4", 00:27:50.676 "traddr": "10.0.0.2", 00:27:50.676 "trsvcid": "4420", 00:27:50.676 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:50.676 }, 00:27:50.676 "ctrlr_data": 
{ 00:27:50.676 "cntlid": 1, 00:27:50.676 "vendor_id": "0x8086", 00:27:50.676 "model_number": "SPDK bdev Controller", 00:27:50.676 "serial_number": "SPDK0", 00:27:50.676 "firmware_revision": "25.01", 00:27:50.676 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:50.676 "oacs": { 00:27:50.676 "security": 0, 00:27:50.676 "format": 0, 00:27:50.676 "firmware": 0, 00:27:50.676 "ns_manage": 0 00:27:50.676 }, 00:27:50.676 "multi_ctrlr": true, 00:27:50.676 "ana_reporting": false 00:27:50.676 }, 00:27:50.676 "vs": { 00:27:50.676 "nvme_version": "1.3" 00:27:50.676 }, 00:27:50.676 "ns_data": { 00:27:50.676 "id": 1, 00:27:50.676 "can_share": true 00:27:50.676 } 00:27:50.676 } 00:27:50.676 ], 00:27:50.676 "mp_policy": "active_passive" 00:27:50.676 } 00:27:50.676 } 00:27:50.676 ] 00:27:50.676 20:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1797325 00:27:50.676 20:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:27:50.676 20:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:50.933 Running I/O for 10 seconds... 00:27:51.897 Latency(us) 00:27:51.897 [2024-11-26T19:57:55.594Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:51.897 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:51.897 Nvme0n1 : 1.00 14859.00 58.04 0.00 0.00 0.00 0.00 0.00 00:27:51.897 [2024-11-26T19:57:55.594Z] =================================================================================================================== 00:27:51.897 [2024-11-26T19:57:55.594Z] Total : 14859.00 58.04 0.00 0.00 0.00 0.00 0.00 00:27:51.897 00:27:52.831 20:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f8a88770-3eaa-46eb-b072-170420791def 00:27:52.831 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:52.831 Nvme0n1 : 2.00 15113.00 59.04 0.00 0.00 0.00 0.00 0.00 00:27:52.831 [2024-11-26T19:57:56.528Z] =================================================================================================================== 00:27:52.831 [2024-11-26T19:57:56.528Z] Total : 15113.00 59.04 0.00 0.00 0.00 0.00 0.00 00:27:52.831 00:27:53.089 true 00:27:53.089 20:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8a88770-3eaa-46eb-b072-170420791def 00:27:53.089 20:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:27:53.347 20:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:27:53.347 20:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:27:53.347 20:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1797325 00:27:53.911 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:53.911 Nvme0n1 : 
3.00 15240.00 59.53 0.00 0.00 0.00 0.00 0.00 00:27:53.911 [2024-11-26T19:57:57.608Z] =================================================================================================================== 00:27:53.911 [2024-11-26T19:57:57.608Z] Total : 15240.00 59.53 0.00 0.00 0.00 0.00 0.00 00:27:53.911 00:27:54.847 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:54.848 Nvme0n1 : 4.00 15335.25 59.90 0.00 0.00 0.00 0.00 0.00 00:27:54.848 [2024-11-26T19:57:58.545Z] =================================================================================================================== 00:27:54.848 [2024-11-26T19:57:58.545Z] Total : 15335.25 59.90 0.00 0.00 0.00 0.00 0.00 00:27:54.848 00:27:55.780 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:55.780 Nvme0n1 : 5.00 15417.80 60.23 0.00 0.00 0.00 0.00 0.00 00:27:55.780 [2024-11-26T19:57:59.477Z] =================================================================================================================== 00:27:55.780 [2024-11-26T19:57:59.477Z] Total : 15417.80 60.23 0.00 0.00 0.00 0.00 0.00 00:27:55.780 00:27:57.152 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:57.152 Nvme0n1 : 6.00 15409.33 60.19 0.00 0.00 0.00 0.00 0.00 00:27:57.152 [2024-11-26T19:58:00.849Z] =================================================================================================================== 00:27:57.152 [2024-11-26T19:58:00.849Z] Total : 15409.33 60.19 0.00 0.00 0.00 0.00 0.00 00:27:57.152 00:27:58.084 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:58.084 Nvme0n1 : 7.00 15439.57 60.31 0.00 0.00 0.00 0.00 0.00 00:27:58.084 [2024-11-26T19:58:01.781Z] =================================================================================================================== 00:27:58.084 [2024-11-26T19:58:01.781Z] Total : 15439.57 60.31 0.00 0.00 0.00 0.00 0.00 00:27:58.084 00:27:59.017 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:59.017 Nvme0n1 : 8.00 15486.12 60.49 0.00 0.00 0.00 0.00 0.00 00:27:59.017 [2024-11-26T19:58:02.714Z] =================================================================================================================== 00:27:59.017 [2024-11-26T19:58:02.714Z] Total : 15486.12 60.49 0.00 0.00 0.00 0.00 0.00 00:27:59.017 00:27:59.949 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:59.949 Nvme0n1 : 9.00 15515.22 60.61 0.00 0.00 0.00 0.00 0.00 00:27:59.949 [2024-11-26T19:58:03.646Z] =================================================================================================================== 00:27:59.949 [2024-11-26T19:58:03.646Z] Total : 15515.22 60.61 0.00 0.00 0.00 0.00 0.00 00:27:59.949 00:28:00.881 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:00.881 Nvme0n1 : 10.00 15551.20 60.75 0.00 0.00 0.00 0.00 0.00 00:28:00.881 [2024-11-26T19:58:04.578Z] =================================================================================================================== 00:28:00.881 [2024-11-26T19:58:04.578Z] Total : 15551.20 60.75 0.00 0.00 0.00 0.00 0.00 00:28:00.881 00:28:00.881 00:28:00.881 Latency(us) 00:28:00.881 [2024-11-26T19:58:04.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:00.881 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:00.881 Nvme0n1 : 10.01 15553.08 60.75 0.00 0.00 8225.23 5267.15 18544.26 00:28:00.881 
[2024-11-26T19:58:04.578Z] =================================================================================================================== 00:28:00.881 [2024-11-26T19:58:04.578Z] Total : 15553.08 60.75 0.00 0.00 8225.23 5267.15 18544.26 00:28:00.881 { 00:28:00.881 "results": [ 00:28:00.881 { 00:28:00.881 "job": "Nvme0n1", 00:28:00.881 "core_mask": "0x2", 00:28:00.881 "workload": "randwrite", 00:28:00.881 "status": "finished", 00:28:00.881 "queue_depth": 128, 00:28:00.881 "io_size": 4096, 00:28:00.881 "runtime": 10.00702, 00:28:00.881 "iops": 15553.081736620892, 00:28:00.881 "mibps": 60.75422553367536, 00:28:00.881 "io_failed": 0, 00:28:00.881 "io_timeout": 0, 00:28:00.881 "avg_latency_us": 8225.234378746774, 00:28:00.881 "min_latency_us": 5267.152592592593, 00:28:00.881 "max_latency_us": 18544.26074074074 00:28:00.881 } 00:28:00.881 ], 00:28:00.881 "core_count": 1 00:28:00.881 } 00:28:00.881 20:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1797196 00:28:00.881 20:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1797196 ']' 00:28:00.881 20:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1797196 00:28:00.881 20:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:28:00.881 20:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:00.881 20:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1797196 00:28:00.881 20:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:00.881 20:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:00.881 20:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1797196' 00:28:00.881 killing process with pid 1797196 00:28:00.881 20:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1797196 00:28:00.881 Received shutdown signal, test time was about 10.000000 seconds 00:28:00.881 00:28:00.881 Latency(us) 00:28:00.881 [2024-11-26T19:58:04.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:00.881 [2024-11-26T19:58:04.578Z] =================================================================================================================== 00:28:00.881 [2024-11-26T19:58:04.578Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:00.881 20:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1797196 00:28:01.139 20:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:01.397 20:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:28:01.655 20:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8a88770-3eaa-46eb-b072-170420791def 00:28:01.655 20:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:28:01.913 20:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:28:01.913 20:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:28:01.913 20:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1794712 00:28:01.913 20:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1794712 00:28:01.913 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1794712 Killed "${NVMF_APP[@]}" "$@" 00:28:01.913 20:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:28:01.913 20:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:28:01.913 20:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:01.913 20:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:01.913 20:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:01.913 20:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1798644 00:28:01.913 20:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:28:01.913 20:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1798644 00:28:01.913 20:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1798644 ']' 00:28:01.913 20:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:01.913 20:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:01.913 20:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:01.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
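(Editor's note: a minimal sketch, not part of the captured run, of how the free-cluster check above reads lvstore state back through the SPDK RPC helper. It assumes rpc.py is invoked from the root of an SPDK checkout; the lvstore UUID and the expected value of 61 are the ones reported in this log.)
free_clusters=$(./scripts/rpc.py bdev_lvol_get_lvstores -u f8a88770-3eaa-46eb-b072-170420791def | jq -r '.[0].free_clusters')
(( free_clusters == 61 )) && echo "free cluster count matches the value the test expects at this point"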
00:28:01.913 20:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:01.913 20:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:02.172 [2024-11-26 20:58:05.637421] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:02.172 [2024-11-26 20:58:05.638520] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:28:02.172 [2024-11-26 20:58:05.638576] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:02.172 [2024-11-26 20:58:05.713113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.172 [2024-11-26 20:58:05.771167] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:02.172 [2024-11-26 20:58:05.771221] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:02.172 [2024-11-26 20:58:05.771235] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:02.172 [2024-11-26 20:58:05.771246] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:02.172 [2024-11-26 20:58:05.771255] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:02.172 [2024-11-26 20:58:05.771895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:02.430 [2024-11-26 20:58:05.870373] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:02.430 [2024-11-26 20:58:05.870686] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
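(Editor's note: the app_setup_trace notices above already name the commands for inspecting this interrupt-mode target; they are repeated here only as a compact sketch. The shm id 0 and the nvmf_trace.0 file are the ones from this run.)
spdk_trace -s nvmf -i 0        # live snapshot of tracepoint events, per the notice above
cp /dev/shm/nvmf_trace.0 .     # or keep the trace file for offline analysis/debug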
00:28:02.430 20:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:02.430 20:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:28:02.430 20:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:02.430 20:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:02.430 20:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:02.430 20:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:02.430 20:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:02.688 [2024-11-26 20:58:06.182751] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:28:02.688 [2024-11-26 20:58:06.182880] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:28:02.688 [2024-11-26 20:58:06.182928] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:28:02.688 20:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:28:02.688 20:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1f48ba8d-185e-40e2-b824-cf3d18b118df 00:28:02.688 20:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1f48ba8d-185e-40e2-b824-cf3d18b118df 00:28:02.688 20:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:02.688 20:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:28:02.688 20:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:02.688 20:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:02.688 20:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:02.946 20:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1f48ba8d-185e-40e2-b824-cf3d18b118df -t 2000 00:28:03.204 [ 00:28:03.204 { 00:28:03.204 "name": "1f48ba8d-185e-40e2-b824-cf3d18b118df", 00:28:03.204 "aliases": [ 00:28:03.204 "lvs/lvol" 00:28:03.204 ], 00:28:03.204 "product_name": "Logical Volume", 00:28:03.204 "block_size": 4096, 00:28:03.204 "num_blocks": 38912, 00:28:03.204 "uuid": "1f48ba8d-185e-40e2-b824-cf3d18b118df", 00:28:03.204 "assigned_rate_limits": { 00:28:03.204 "rw_ios_per_sec": 0, 00:28:03.204 "rw_mbytes_per_sec": 0, 00:28:03.204 
"r_mbytes_per_sec": 0, 00:28:03.204 "w_mbytes_per_sec": 0 00:28:03.204 }, 00:28:03.204 "claimed": false, 00:28:03.204 "zoned": false, 00:28:03.204 "supported_io_types": { 00:28:03.204 "read": true, 00:28:03.204 "write": true, 00:28:03.204 "unmap": true, 00:28:03.204 "flush": false, 00:28:03.204 "reset": true, 00:28:03.204 "nvme_admin": false, 00:28:03.204 "nvme_io": false, 00:28:03.204 "nvme_io_md": false, 00:28:03.204 "write_zeroes": true, 00:28:03.204 "zcopy": false, 00:28:03.204 "get_zone_info": false, 00:28:03.204 "zone_management": false, 00:28:03.204 "zone_append": false, 00:28:03.204 "compare": false, 00:28:03.204 "compare_and_write": false, 00:28:03.204 "abort": false, 00:28:03.204 "seek_hole": true, 00:28:03.204 "seek_data": true, 00:28:03.204 "copy": false, 00:28:03.204 "nvme_iov_md": false 00:28:03.204 }, 00:28:03.204 "driver_specific": { 00:28:03.204 "lvol": { 00:28:03.204 "lvol_store_uuid": "f8a88770-3eaa-46eb-b072-170420791def", 00:28:03.204 "base_bdev": "aio_bdev", 00:28:03.204 "thin_provision": false, 00:28:03.204 "num_allocated_clusters": 38, 00:28:03.204 "snapshot": false, 00:28:03.204 "clone": false, 00:28:03.204 "esnap_clone": false 00:28:03.204 } 00:28:03.204 } 00:28:03.204 } 00:28:03.204 ] 00:28:03.204 20:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:28:03.204 20:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8a88770-3eaa-46eb-b072-170420791def 00:28:03.204 20:58:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:28:03.462 20:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:28:03.462 20:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8a88770-3eaa-46eb-b072-170420791def 00:28:03.462 20:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:28:03.721 20:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:28:03.721 20:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:03.978 [2024-11-26 20:58:07.552517] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:28:03.978 20:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8a88770-3eaa-46eb-b072-170420791def 00:28:03.978 20:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:28:03.978 20:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8a88770-3eaa-46eb-b072-170420791def 00:28:03.978 20:58:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:03.978 20:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:03.978 20:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:03.978 20:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:03.978 20:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:03.978 20:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:03.978 20:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:03.978 20:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:28:03.978 20:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8a88770-3eaa-46eb-b072-170420791def 00:28:04.236 request: 00:28:04.236 { 00:28:04.236 "uuid": "f8a88770-3eaa-46eb-b072-170420791def", 00:28:04.236 "method": "bdev_lvol_get_lvstores", 00:28:04.236 "req_id": 1 00:28:04.236 } 00:28:04.236 Got JSON-RPC error response 00:28:04.236 response: 00:28:04.236 { 00:28:04.236 "code": -19, 00:28:04.236 "message": "No such device" 00:28:04.236 } 00:28:04.236 20:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:28:04.236 20:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:04.236 20:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:04.236 20:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:04.236 20:58:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:04.494 aio_bdev 00:28:04.494 20:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1f48ba8d-185e-40e2-b824-cf3d18b118df 00:28:04.494 20:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1f48ba8d-185e-40e2-b824-cf3d18b118df 00:28:04.494 20:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:04.494 20:58:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:28:04.494 20:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:04.494 20:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:04.494 20:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:04.752 20:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1f48ba8d-185e-40e2-b824-cf3d18b118df -t 2000 00:28:05.011 [ 00:28:05.011 { 00:28:05.011 "name": "1f48ba8d-185e-40e2-b824-cf3d18b118df", 00:28:05.011 "aliases": [ 00:28:05.011 "lvs/lvol" 00:28:05.011 ], 00:28:05.011 "product_name": "Logical Volume", 00:28:05.011 "block_size": 4096, 00:28:05.011 "num_blocks": 38912, 00:28:05.011 "uuid": "1f48ba8d-185e-40e2-b824-cf3d18b118df", 00:28:05.011 "assigned_rate_limits": { 00:28:05.011 "rw_ios_per_sec": 0, 00:28:05.011 "rw_mbytes_per_sec": 0, 00:28:05.011 "r_mbytes_per_sec": 0, 00:28:05.011 "w_mbytes_per_sec": 0 00:28:05.011 }, 00:28:05.011 "claimed": false, 00:28:05.011 "zoned": false, 00:28:05.011 "supported_io_types": { 00:28:05.011 "read": true, 00:28:05.011 "write": true, 00:28:05.011 "unmap": true, 00:28:05.011 "flush": false, 00:28:05.011 "reset": true, 00:28:05.011 "nvme_admin": false, 00:28:05.011 "nvme_io": false, 00:28:05.011 "nvme_io_md": false, 00:28:05.011 "write_zeroes": true, 00:28:05.011 "zcopy": false, 00:28:05.011 "get_zone_info": false, 00:28:05.011 "zone_management": false, 00:28:05.011 "zone_append": false, 00:28:05.011 "compare": false, 00:28:05.011 "compare_and_write": false, 00:28:05.011 "abort": false, 00:28:05.011 "seek_hole": true, 00:28:05.011 "seek_data": true, 00:28:05.011 "copy": false, 00:28:05.011 "nvme_iov_md": false 00:28:05.011 }, 00:28:05.011 "driver_specific": { 00:28:05.011 "lvol": { 00:28:05.011 "lvol_store_uuid": "f8a88770-3eaa-46eb-b072-170420791def", 00:28:05.011 "base_bdev": "aio_bdev", 00:28:05.011 "thin_provision": false, 00:28:05.011 "num_allocated_clusters": 38, 00:28:05.011 "snapshot": false, 00:28:05.011 "clone": false, 00:28:05.011 "esnap_clone": false 00:28:05.011 } 00:28:05.011 } 00:28:05.011 } 00:28:05.011 ] 00:28:05.011 20:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:28:05.011 20:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8a88770-3eaa-46eb-b072-170420791def 00:28:05.011 20:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:28:05.269 20:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:28:05.269 20:58:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f8a88770-3eaa-46eb-b072-170420791def 00:28:05.269 20:58:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:28:05.527 20:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:28:05.527 20:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1f48ba8d-185e-40e2-b824-cf3d18b118df 00:28:05.786 20:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f8a88770-3eaa-46eb-b072-170420791def 00:28:06.352 20:58:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:06.352 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:06.610 00:28:06.610 real 0m19.501s 00:28:06.610 user 0m35.690s 00:28:06.610 sys 0m5.056s 00:28:06.610 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:06.610 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:06.610 ************************************ 00:28:06.610 END TEST lvs_grow_dirty 00:28:06.610 ************************************ 00:28:06.610 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:28:06.610 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:28:06.610 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:28:06.610 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:28:06.610 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:28:06.610 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:28:06.610 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:28:06.610 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:28:06.610 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:28:06.610 nvmf_trace.0 00:28:06.610 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:28:06.610 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:28:06.610 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:06.610 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
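(Editor's note: a recap sketch of the teardown order captured just above, lvol first, then its lvstore, then the backing aio bdev and its file. The names and UUIDs are the ones from this run; shortening the paths to an SPDK checkout root is an assumption.)
./scripts/rpc.py bdev_lvol_delete 1f48ba8d-185e-40e2-b824-cf3d18b118df
./scripts/rpc.py bdev_lvol_delete_lvstore -u f8a88770-3eaa-46eb-b072-170420791def
./scripts/rpc.py bdev_aio_delete aio_bdev
rm -f test/nvmf/target/aio_bdev    # the AIO backing file created for this test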
00:28:06.610 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:06.610 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:28:06.610 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:06.610 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:06.610 rmmod nvme_tcp 00:28:06.610 rmmod nvme_fabrics 00:28:06.610 rmmod nvme_keyring 00:28:06.610 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:06.610 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:28:06.610 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:28:06.610 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1798644 ']' 00:28:06.610 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1798644 00:28:06.610 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1798644 ']' 00:28:06.610 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1798644 00:28:06.610 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:28:06.610 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:06.610 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1798644 00:28:06.610 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:06.610 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:06.610 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1798644' 00:28:06.610 killing process with pid 1798644 00:28:06.610 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1798644 00:28:06.610 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1798644 00:28:06.868 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:06.868 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:06.868 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:06.868 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:28:06.868 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:28:06.868 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:06.868 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:28:06.868 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:06.868 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:06.868 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:06.868 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:06.868 20:58:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:09.403 00:28:09.403 real 0m42.884s 00:28:09.403 user 0m54.815s 00:28:09.403 sys 0m9.017s 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:09.403 ************************************ 00:28:09.403 END TEST nvmf_lvs_grow 00:28:09.403 ************************************ 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:09.403 ************************************ 00:28:09.403 START TEST nvmf_bdev_io_wait 00:28:09.403 ************************************ 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:28:09.403 * Looking for test storage... 
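(Editor's note: the nvmf_bdev_io_wait test that starts here is invoked by run_test with the arguments shown above; as a sketch, and only as an assumption about a local checkout, the same script could be run standalone like this.)
cd spdk && ./test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode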
00:28:09.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:09.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.403 --rc genhtml_branch_coverage=1 00:28:09.403 --rc genhtml_function_coverage=1 00:28:09.403 --rc genhtml_legend=1 00:28:09.403 --rc geninfo_all_blocks=1 00:28:09.403 --rc geninfo_unexecuted_blocks=1 00:28:09.403 00:28:09.403 ' 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:09.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.403 --rc genhtml_branch_coverage=1 00:28:09.403 --rc genhtml_function_coverage=1 00:28:09.403 --rc genhtml_legend=1 00:28:09.403 --rc geninfo_all_blocks=1 00:28:09.403 --rc geninfo_unexecuted_blocks=1 00:28:09.403 00:28:09.403 ' 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:09.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.403 --rc genhtml_branch_coverage=1 00:28:09.403 --rc genhtml_function_coverage=1 00:28:09.403 --rc genhtml_legend=1 00:28:09.403 --rc geninfo_all_blocks=1 00:28:09.403 --rc geninfo_unexecuted_blocks=1 00:28:09.403 00:28:09.403 ' 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:09.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.403 --rc genhtml_branch_coverage=1 00:28:09.403 --rc genhtml_function_coverage=1 00:28:09.403 --rc genhtml_legend=1 00:28:09.403 --rc geninfo_all_blocks=1 00:28:09.403 --rc 
geninfo_unexecuted_blocks=1 00:28:09.403 00:28:09.403 ' 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:09.403 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:28:09.404 20:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
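(Editor's note: a sketch of the target command that the common.sh lines above assemble. The concrete values, shm id 0, trace mask 0xFFFF, core mask 0x1, namespace cvl_0_0_ns_spdk, are the ones visible in the nvmf_tgt invocation earlier in this log; the relative binary path is an assumption, and the core mask used by the test starting here may differ.)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1
# -i: shared-memory id ($NVMF_APP_SHM_ID), -e: tracepoint group mask, --interrupt-mode: appended because the interrupt-mode flavour is selected above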
00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
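(Editor's note: a cross-check sketch, not part of the script above: on a host with lspci available, the Intel E810 device id that the e810 array above matches (0x159b) can be listed directly; the next log entries report exactly two such ports.)
lspci -d 8086:159b    # list devices with vendor id 0x8086 and device id 0x159b (E810)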
00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:11.309 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:11.309 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:11.309 Found net devices under 0000:09:00.0: cvl_0_0 00:28:11.309 
20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:11.309 Found net devices under 0000:09:00.1: cvl_0_1 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:11.309 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:11.310 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:11.310 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:11.310 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:11.310 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:11.310 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:11.310 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.401 ms 00:28:11.310 00:28:11.310 --- 10.0.0.2 ping statistics --- 00:28:11.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:11.310 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:28:11.310 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:11.310 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:11.310 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:28:11.310 00:28:11.310 --- 10.0.0.1 ping statistics --- 00:28:11.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:11.310 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:28:11.310 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:11.310 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:28:11.310 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:11.310 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:11.310 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:11.310 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:11.310 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:11.310 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:11.310 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:11.310 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:11.310 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:11.310 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:11.310 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:11.310 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1801168 00:28:11.310 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:28:11.310 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1801168 00:28:11.310 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1801168 ']' 00:28:11.310 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:11.310 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:11.310 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:11.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
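Condensed from the nvmf_tcp_init trace above: the target-side port is moved into its own network namespace while the initiator-side port stays in the root namespace, giving an isolated point-to-point path between 10.0.0.1 and 10.0.0.2. A minimal re-creation using the interface names, addresses and firewall rule from the log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                             # root ns -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # and the reverse direction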
00:28:11.310 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:11.310 20:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:11.310 [2024-11-26 20:58:14.994178] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:11.310 [2024-11-26 20:58:14.995268] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:28:11.310 [2024-11-26 20:58:14.995364] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:11.568 [2024-11-26 20:58:15.072238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:11.568 [2024-11-26 20:58:15.134132] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:11.568 [2024-11-26 20:58:15.134192] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:11.568 [2024-11-26 20:58:15.134205] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:11.569 [2024-11-26 20:58:15.134230] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:11.569 [2024-11-26 20:58:15.134241] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:11.569 [2024-11-26 20:58:15.135807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:11.569 [2024-11-26 20:58:15.135865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:11.569 [2024-11-26 20:58:15.135934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:11.569 [2024-11-26 20:58:15.135938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:11.569 [2024-11-26 20:58:15.136446] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
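nvmfappstart and waitforlisten, as traced above, amount to launching nvmf_tgt inside the target namespace in interrupt mode and polling its RPC socket until it answers; --wait-for-rpc keeps subsystem initialization on hold until the test has applied its bdev options. A rough sketch (paths shortened to the repository root; the polling loop is an illustrative stand-in for the real waitforlisten helper):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
    nvmfpid=$!
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do     # default socket /var/tmp/spdk.sock
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done
    # Per the notices above, a trace snapshot of this instance can be captured at any time with
    # ./build/bin/spdk_trace -s nvmf -i 0, or by copying /dev/shm/nvmf_trace.0 for offline analysis.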
00:28:11.569 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:11.569 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:28:11.569 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:11.569 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:11.569 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:11.569 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:11.569 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:28:11.569 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.569 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:11.569 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.569 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:28:11.569 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.569 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:11.828 [2024-11-26 20:58:15.315265] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:11.828 [2024-11-26 20:58:15.315558] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:11.828 [2024-11-26 20:58:15.316496] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:11.828 [2024-11-26 20:58:15.317318] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
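The two RPCs traced above are why the target was started with --wait-for-rpc: bdev_set_options has to run before the bdev layer initializes, and the deliberately small IO pool (pool size 5, cache size 1) is what provokes the bdev IO-wait path this test exercises. Equivalent direct calls against the target's default RPC socket (rpc_cmd is the test framework's wrapper around scripts/rpc.py; arguments copied from the trace):

    ./scripts/rpc.py bdev_set_options -p 5 -c 1    # bdev_io_pool_size=5, bdev_io_cache_size=1
    ./scripts/rpc.py framework_start_init          # now let subsystem initialization proceed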
00:28:11.828 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.828 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:11.828 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.828 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:11.828 [2024-11-26 20:58:15.324641] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:11.828 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.828 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:11.828 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.828 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:11.828 Malloc0 00:28:11.828 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.828 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:11.828 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.828 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:11.828 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.828 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:11.828 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.828 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:11.828 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.828 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:11.828 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.828 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:11.828 [2024-11-26 20:58:15.380825] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:11.828 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.828 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1801311 00:28:11.828 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1801313 00:28:11.828 20:58:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:28:11.828 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:28:11.828 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:11.828 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:11.828 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:11.828 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1801315 00:28:11.828 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:28:11.828 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:28:11.828 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:11.828 { 00:28:11.828 "params": { 00:28:11.828 "name": "Nvme$subsystem", 00:28:11.828 "trtype": "$TEST_TRANSPORT", 00:28:11.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:11.828 "adrfam": "ipv4", 00:28:11.828 "trsvcid": "$NVMF_PORT", 00:28:11.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:11.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:11.828 "hdgst": ${hdgst:-false}, 00:28:11.828 "ddgst": ${ddgst:-false} 00:28:11.828 }, 00:28:11.828 "method": "bdev_nvme_attach_controller" 00:28:11.828 } 00:28:11.828 EOF 00:28:11.828 )") 00:28:11.828 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:11.828 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:11.828 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:11.828 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1801317 00:28:11.828 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:11.828 { 00:28:11.828 "params": { 00:28:11.828 "name": "Nvme$subsystem", 00:28:11.828 "trtype": "$TEST_TRANSPORT", 00:28:11.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:11.829 "adrfam": "ipv4", 00:28:11.829 "trsvcid": "$NVMF_PORT", 00:28:11.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:11.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:11.829 "hdgst": ${hdgst:-false}, 00:28:11.829 "ddgst": ${ddgst:-false} 00:28:11.829 }, 00:28:11.829 "method": "bdev_nvme_attach_controller" 00:28:11.829 } 00:28:11.829 EOF 00:28:11.829 )") 00:28:11.829 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:28:11.829 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
gen_nvmf_target_json 00:28:11.829 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:28:11.829 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:11.829 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:11.829 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:11.829 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:28:11.829 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:28:11.829 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:11.829 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:11.829 { 00:28:11.829 "params": { 00:28:11.829 "name": "Nvme$subsystem", 00:28:11.829 "trtype": "$TEST_TRANSPORT", 00:28:11.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:11.829 "adrfam": "ipv4", 00:28:11.829 "trsvcid": "$NVMF_PORT", 00:28:11.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:11.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:11.829 "hdgst": ${hdgst:-false}, 00:28:11.829 "ddgst": ${ddgst:-false} 00:28:11.829 }, 00:28:11.829 "method": "bdev_nvme_attach_controller" 00:28:11.829 } 00:28:11.829 EOF 00:28:11.829 )") 00:28:11.829 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:28:11.829 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:28:11.829 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:11.829 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:11.829 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:11.829 { 00:28:11.829 "params": { 00:28:11.829 "name": "Nvme$subsystem", 00:28:11.829 "trtype": "$TEST_TRANSPORT", 00:28:11.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:11.829 "adrfam": "ipv4", 00:28:11.829 "trsvcid": "$NVMF_PORT", 00:28:11.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:11.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:11.829 "hdgst": ${hdgst:-false}, 00:28:11.829 "ddgst": ${ddgst:-false} 00:28:11.829 }, 00:28:11.829 "method": "bdev_nvme_attach_controller" 00:28:11.829 } 00:28:11.829 EOF 00:28:11.829 )") 00:28:11.829 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:11.829 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1801311 00:28:11.829 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:28:11.829 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:28:11.829 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
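Before these bdevperf command lines, the target side was provisioned with the short RPC sequence traced a little earlier: a TCP transport, a 64 MiB RAM-backed bdev, and a subsystem exposing it on the namespaced listener address. The same sequence as direct rpc.py calls against the target's default socket (arguments copied from the trace):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0            # 64 MiB, 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420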
00:28:11.829 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:28:11.829 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:28:11.829 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:11.829 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:11.829 "params": { 00:28:11.829 "name": "Nvme1", 00:28:11.829 "trtype": "tcp", 00:28:11.829 "traddr": "10.0.0.2", 00:28:11.829 "adrfam": "ipv4", 00:28:11.829 "trsvcid": "4420", 00:28:11.829 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:11.829 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:11.829 "hdgst": false, 00:28:11.829 "ddgst": false 00:28:11.829 }, 00:28:11.829 "method": "bdev_nvme_attach_controller" 00:28:11.829 }' 00:28:11.829 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:11.829 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:11.829 "params": { 00:28:11.829 "name": "Nvme1", 00:28:11.829 "trtype": "tcp", 00:28:11.829 "traddr": "10.0.0.2", 00:28:11.829 "adrfam": "ipv4", 00:28:11.829 "trsvcid": "4420", 00:28:11.829 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:11.829 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:11.829 "hdgst": false, 00:28:11.829 "ddgst": false 00:28:11.829 }, 00:28:11.829 "method": "bdev_nvme_attach_controller" 00:28:11.829 }' 00:28:11.829 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:11.829 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:11.829 "params": { 00:28:11.829 "name": "Nvme1", 00:28:11.829 "trtype": "tcp", 00:28:11.829 "traddr": "10.0.0.2", 00:28:11.829 "adrfam": "ipv4", 00:28:11.829 "trsvcid": "4420", 00:28:11.829 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:11.829 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:11.829 "hdgst": false, 00:28:11.829 "ddgst": false 00:28:11.829 }, 00:28:11.829 "method": "bdev_nvme_attach_controller" 00:28:11.829 }' 00:28:11.829 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:28:11.829 20:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:11.829 "params": { 00:28:11.829 "name": "Nvme1", 00:28:11.829 "trtype": "tcp", 00:28:11.829 "traddr": "10.0.0.2", 00:28:11.829 "adrfam": "ipv4", 00:28:11.829 "trsvcid": "4420", 00:28:11.829 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:11.829 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:11.829 "hdgst": false, 00:28:11.829 "ddgst": false 00:28:11.829 }, 00:28:11.829 "method": "bdev_nvme_attach_controller" 00:28:11.829 }' 00:28:11.829 [2024-11-26 20:58:15.431945] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:28:11.829 [2024-11-26 20:58:15.431946] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:28:11.829 [2024-11-26 20:58:15.431946] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
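The JSON fragments printed above are the per-instance configs that gen_nvmf_target_json resolved from the environment; each is simply a bdev_nvme_attach_controller call in file form. The same attachment expressed as a manual RPC against a running bdevperf instance would look roughly like this (the /var/tmp/bdevperf.sock path is illustrative here; in this test the config is fed in via --json instead of a hand-issued RPC):

    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1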
00:28:11.829 [2024-11-26 20:58:15.432035] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:28:11.829 [2024-11-26 20:58:15.432036] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:28:11.829 [2024-11-26 20:58:15.432035] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:11.829 [2024-11-26 20:58:15.432059] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:28:11.829 [2024-11-26 20:58:15.432125] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:28:12.087 [2024-11-26 20:58:15.617281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.087 [2024-11-26 20:58:15.673698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:28:12.087 [2024-11-26 20:58:15.722459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.087 [2024-11-26 20:58:15.778619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:12.345 [2024-11-26 20:58:15.824605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.345 [2024-11-26 20:58:15.882487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:28:12.345 [2024-11-26 20:58:15.901514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.345 [2024-11-26 20:58:15.954133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:28:12.603 Running I/O for 1 seconds... 00:28:12.603 Running I/O for 1 seconds... 00:28:12.603 Running I/O for 1 seconds... 00:28:12.603 Running I/O for 1 seconds... 
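The four instances now running are independent bdevperf processes, each on its own core and shm id, each fed an attach config on fd 63 through process substitution, and each driving a different workload at queue depth 128 with 4 KiB IOs for one second. Stripped of the test plumbing, the launch pattern is (paths shortened; gen_nvmf_target_json is the helper traced above):

    ./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
    ./build/examples/bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
    ./build/examples/bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
    ./build/examples/bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
    wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"    # the per-workload latency tables below are their output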
00:28:13.536 6148.00 IOPS, 24.02 MiB/s [2024-11-26T19:58:17.233Z] 10144.00 IOPS, 39.62 MiB/s 00:28:13.536 Latency(us) 00:28:13.536 [2024-11-26T19:58:17.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:13.536 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:28:13.536 Nvme1n1 : 1.01 10212.75 39.89 0.00 0.00 12487.57 5194.33 16796.63 00:28:13.536 [2024-11-26T19:58:17.233Z] =================================================================================================================== 00:28:13.536 [2024-11-26T19:58:17.233Z] Total : 10212.75 39.89 0.00 0.00 12487.57 5194.33 16796.63 00:28:13.536 00:28:13.536 Latency(us) 00:28:13.536 [2024-11-26T19:58:17.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:13.537 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:28:13.537 Nvme1n1 : 1.06 5916.64 23.11 0.00 0.00 20603.38 4053.52 60972.75 00:28:13.537 [2024-11-26T19:58:17.234Z] =================================================================================================================== 00:28:13.537 [2024-11-26T19:58:17.234Z] Total : 5916.64 23.11 0.00 0.00 20603.38 4053.52 60972.75 00:28:13.537 5952.00 IOPS, 23.25 MiB/s 00:28:13.537 Latency(us) 00:28:13.537 [2024-11-26T19:58:17.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:13.537 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:28:13.537 Nvme1n1 : 1.01 6071.28 23.72 0.00 0.00 21016.63 4903.06 39418.69 00:28:13.537 [2024-11-26T19:58:17.234Z] =================================================================================================================== 00:28:13.537 [2024-11-26T19:58:17.234Z] Total : 6071.28 23.72 0.00 0.00 21016.63 4903.06 39418.69 00:28:13.537 188512.00 IOPS, 736.38 MiB/s 00:28:13.537 Latency(us) 00:28:13.537 [2024-11-26T19:58:17.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:13.537 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:28:13.537 Nvme1n1 : 1.00 188158.17 734.99 0.00 0.00 676.50 301.89 1856.85 00:28:13.537 [2024-11-26T19:58:17.234Z] =================================================================================================================== 00:28:13.537 [2024-11-26T19:58:17.234Z] Total : 188158.17 734.99 0.00 0.00 676.50 301.89 1856.85 00:28:13.795 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1801313 00:28:13.795 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1801315 00:28:13.795 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1801317 00:28:13.795 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:13.795 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.795 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:13.795 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.795 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:28:13.795 20:58:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:28:13.795 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:13.795 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:28:13.795 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:13.795 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:28:13.795 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:13.795 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:13.795 rmmod nvme_tcp 00:28:13.795 rmmod nvme_fabrics 00:28:13.795 rmmod nvme_keyring 00:28:13.795 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:13.795 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:28:13.795 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:28:13.795 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1801168 ']' 00:28:13.795 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1801168 00:28:13.795 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1801168 ']' 00:28:13.795 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1801168 00:28:13.795 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:28:13.795 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:13.795 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1801168 00:28:14.053 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:14.053 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:14.053 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1801168' 00:28:14.053 killing process with pid 1801168 00:28:14.053 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1801168 00:28:14.053 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1801168 00:28:14.053 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:14.053 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:14.053 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:14.053 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:28:14.053 20:58:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:28:14.053 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:14.053 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:28:14.053 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:14.053 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:14.053 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.053 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:14.053 20:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.590 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:16.590 00:28:16.590 real 0m7.188s 00:28:16.590 user 0m14.712s 00:28:16.590 sys 0m3.884s 00:28:16.590 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:16.590 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:16.590 ************************************ 00:28:16.590 END TEST nvmf_bdev_io_wait 00:28:16.590 ************************************ 00:28:16.590 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:28:16.590 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:16.590 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:16.590 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:16.590 ************************************ 00:28:16.590 START TEST nvmf_queue_depth 00:28:16.590 ************************************ 00:28:16.590 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:28:16.590 * Looking for test storage... 
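The teardown running here (nvmftestfini) undoes everything the init path set up: unload the kernel NVMe/TCP modules, kill the target, strip only the SPDK_NVMF-tagged firewall rules, and remove the namespace. Reduced to its effects (killprocess and remove_spdk_ns are paraphrased, and the ip netns delete is an assumed equivalent of the helper, not its literal body):

    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill 1801168                                            # the nvmf_tgt started at the top of the test
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # keep every rule the test did not add
    ip netns delete cvl_0_0_ns_spdk                         # assumed equivalent of remove_spdk_ns
    ip -4 addr flush cvl_0_1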
00:28:16.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:16.590 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:16.590 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:28:16.590 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:16.590 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:16.590 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:16.590 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:16.590 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:16.590 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:28:16.590 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:28:16.590 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:28:16.590 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:28:16.590 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:28:16.590 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:28:16.590 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:28:16.590 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:16.590 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:28:16.590 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:28:16.590 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:16.590 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:16.590 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:28:16.590 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:28:16.590 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:16.590 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:28:16.590 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:16.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.591 --rc genhtml_branch_coverage=1 00:28:16.591 --rc genhtml_function_coverage=1 00:28:16.591 --rc genhtml_legend=1 00:28:16.591 --rc geninfo_all_blocks=1 00:28:16.591 --rc geninfo_unexecuted_blocks=1 00:28:16.591 00:28:16.591 ' 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:16.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.591 --rc genhtml_branch_coverage=1 00:28:16.591 --rc genhtml_function_coverage=1 00:28:16.591 --rc genhtml_legend=1 00:28:16.591 --rc geninfo_all_blocks=1 00:28:16.591 --rc geninfo_unexecuted_blocks=1 00:28:16.591 00:28:16.591 ' 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:16.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.591 --rc genhtml_branch_coverage=1 00:28:16.591 --rc genhtml_function_coverage=1 00:28:16.591 --rc genhtml_legend=1 00:28:16.591 --rc geninfo_all_blocks=1 00:28:16.591 --rc geninfo_unexecuted_blocks=1 00:28:16.591 00:28:16.591 ' 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:16.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:16.591 --rc genhtml_branch_coverage=1 00:28:16.591 --rc genhtml_function_coverage=1 00:28:16.591 --rc genhtml_legend=1 00:28:16.591 --rc geninfo_all_blocks=1 00:28:16.591 --rc 
geninfo_unexecuted_blocks=1 00:28:16.591 00:28:16.591 ' 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:28:16.591 20:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:18.500 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:18.500 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:28:18.500 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:18.500 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:18.500 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:18.500 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
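The host identity sourced above while loading nvmf/common.sh is generated fresh for each run and later feeds the --hostnqn/--hostid arguments of nvme connect. Roughly (the HOSTID derivation below is an assumption based on the logged values, where it equals the UUID suffix of the generated NQN):

    NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:29f67375-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}           # assumed: the UUID portion of the generated NQN
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")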
00:28:18.500 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:18.500 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:28:18.500 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:18.500 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:28:18.500 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:28:18.500 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:28:18.500 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:28:18.500 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:28:18.500 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:18.501 20:58:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:18.501 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:18.501 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 
00:28:18.501 Found net devices under 0000:09:00.0: cvl_0_0 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:18.501 Found net devices under 0000:09:00.1: cvl_0_1 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:18.501 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:18.760 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:18.760 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:18.760 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:18.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:18.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:28:18.760 00:28:18.760 --- 10.0.0.2 ping statistics --- 00:28:18.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.760 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:28:18.760 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:18.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:18.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:28:18.760 00:28:18.760 --- 10.0.0.1 ping statistics --- 00:28:18.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.760 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:28:18.760 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:18.760 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:28:18.760 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:18.760 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:18.760 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:18.760 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:18.760 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:18.760 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:18.760 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:18.760 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:28:18.760 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:18.760 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:18.760 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:18.760 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1803533 00:28:18.760 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:28:18.760 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1803533 00:28:18.760 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1803533 ']' 00:28:18.760 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:18.760 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:18.760 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:18.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
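The network setup traced above is self-contained on one host: one E810 port (cvl_0_0) is moved into a private namespace to act as the target, the sibling port (cvl_0_1) stays in the root namespace as the initiator, and a single iptables rule admits NVMe/TCP traffic before both directions are ping-checked. A rough reconstruction using the names and addresses from this run (a sketch, not the nvmf_tcp_init helper itself):

  # Target side lives in its own network namespace; initiator stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Admit NVMe/TCP from the initiator interface, then sanity-check connectivity both ways.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1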
00:28:18.760 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:18.760 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:18.760 [2024-11-26 20:58:22.288947] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:18.760 [2024-11-26 20:58:22.290111] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:28:18.760 [2024-11-26 20:58:22.290179] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:18.760 [2024-11-26 20:58:22.367999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.760 [2024-11-26 20:58:22.426720] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:18.760 [2024-11-26 20:58:22.426776] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:18.760 [2024-11-26 20:58:22.426803] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:18.760 [2024-11-26 20:58:22.426814] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:18.760 [2024-11-26 20:58:22.426824] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:18.760 [2024-11-26 20:58:22.427462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:19.044 [2024-11-26 20:58:22.527134] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:19.044 [2024-11-26 20:58:22.527483] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
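With the target up in interrupt mode (note the spdk_interrupt_mode_enable and spdk_thread_set_interrupt_mode notices above), queue_depth.sh provisions it over the default RPC socket. The rpc_cmd calls traced just below correspond to this scripts/rpc.py sequence, shown here as a sketch using the values visible in the trace:

  # Provisioning traced below: TCP transport, a 64 MiB / 512 B malloc bdev,
  # one subsystem carrying that namespace, and a listener on the target-side address.
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

In the test these go through the rpc_cmd wrapper rather than being invoked directly as above, but the arguments are the same ones that appear in the trace.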
00:28:19.044 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:19.044 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:28:19.044 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:19.044 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:19.044 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:19.044 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:19.044 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:19.044 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.044 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:19.044 [2024-11-26 20:58:22.576066] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:19.044 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.044 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:19.044 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.044 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:19.044 Malloc0 00:28:19.044 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.044 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:19.044 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.044 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:19.044 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.044 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:19.044 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.044 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:19.044 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.044 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:19.044 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
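On the initiator side the test starts a bdevperf instance on its own RPC socket, attaches the remote controller, and drives a 10-second verify workload at queue depth 1024; that is what the traces below show. A condensed sketch with the binary paths shortened and the sockets as in the trace:

  # Initiator-side steps traced below.
  bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

In the summary that follows, the MiB/s column is simply IOPS times the 4096-byte I/O size: 8358.23 x 4096 / 2^20 ≈ 32.65 MiB/s.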
00:28:19.044 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:19.044 [2024-11-26 20:58:22.636138] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:19.044 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.044 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1803556 00:28:19.044 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:28:19.044 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:19.044 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1803556 /var/tmp/bdevperf.sock 00:28:19.044 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1803556 ']' 00:28:19.044 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:19.044 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:19.044 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:19.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:19.044 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:19.044 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:19.044 [2024-11-26 20:58:22.683365] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:28:19.044 [2024-11-26 20:58:22.683443] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1803556 ] 00:28:19.301 [2024-11-26 20:58:22.754016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.301 [2024-11-26 20:58:22.815241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.301 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:19.301 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:28:19.301 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:19.301 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.301 20:58:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:19.559 NVMe0n1 00:28:19.559 20:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.559 20:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:19.559 Running I/O for 10 seconds... 00:28:21.865 7855.00 IOPS, 30.68 MiB/s [2024-11-26T19:58:26.495Z] 8175.00 IOPS, 31.93 MiB/s [2024-11-26T19:58:27.459Z] 8192.00 IOPS, 32.00 MiB/s [2024-11-26T19:58:28.409Z] 8192.50 IOPS, 32.00 MiB/s [2024-11-26T19:58:29.341Z] 8197.20 IOPS, 32.02 MiB/s [2024-11-26T19:58:30.273Z] 8277.17 IOPS, 32.33 MiB/s [2024-11-26T19:58:31.646Z] 8305.43 IOPS, 32.44 MiB/s [2024-11-26T19:58:32.579Z] 8314.88 IOPS, 32.48 MiB/s [2024-11-26T19:58:33.514Z] 8306.78 IOPS, 32.45 MiB/s [2024-11-26T19:58:33.514Z] 8315.50 IOPS, 32.48 MiB/s 00:28:29.817 Latency(us) 00:28:29.817 [2024-11-26T19:58:33.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:29.817 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:28:29.817 Verification LBA range: start 0x0 length 0x4000 00:28:29.817 NVMe0n1 : 10.07 8358.23 32.65 0.00 0.00 121961.95 12039.21 74953.77 00:28:29.817 [2024-11-26T19:58:33.514Z] =================================================================================================================== 00:28:29.817 [2024-11-26T19:58:33.514Z] Total : 8358.23 32.65 0.00 0.00 121961.95 12039.21 74953.77 00:28:29.817 { 00:28:29.817 "results": [ 00:28:29.817 { 00:28:29.817 "job": "NVMe0n1", 00:28:29.817 "core_mask": "0x1", 00:28:29.817 "workload": "verify", 00:28:29.817 "status": "finished", 00:28:29.817 "verify_range": { 00:28:29.817 "start": 0, 00:28:29.817 "length": 16384 00:28:29.817 }, 00:28:29.817 "queue_depth": 1024, 00:28:29.817 "io_size": 4096, 00:28:29.817 "runtime": 10.069952, 00:28:29.817 "iops": 8358.232492071462, 00:28:29.817 "mibps": 32.64934567215415, 00:28:29.817 "io_failed": 0, 00:28:29.817 "io_timeout": 0, 00:28:29.817 "avg_latency_us": 121961.95242104653, 00:28:29.817 "min_latency_us": 12039.205925925926, 00:28:29.817 "max_latency_us": 74953.76592592592 00:28:29.817 } 00:28:29.817 
], 00:28:29.817 "core_count": 1 00:28:29.817 } 00:28:29.817 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1803556 00:28:29.817 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1803556 ']' 00:28:29.817 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1803556 00:28:29.817 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:28:29.817 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:29.817 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1803556 00:28:29.817 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:29.817 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:29.817 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1803556' 00:28:29.817 killing process with pid 1803556 00:28:29.817 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1803556 00:28:29.817 Received shutdown signal, test time was about 10.000000 seconds 00:28:29.817 00:28:29.817 Latency(us) 00:28:29.817 [2024-11-26T19:58:33.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:29.817 [2024-11-26T19:58:33.514Z] =================================================================================================================== 00:28:29.817 [2024-11-26T19:58:33.514Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:29.817 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1803556 00:28:30.075 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:28:30.075 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:28:30.075 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:30.075 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:28:30.075 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:30.075 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:28:30.075 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:30.075 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:30.075 rmmod nvme_tcp 00:28:30.075 rmmod nvme_fabrics 00:28:30.075 rmmod nvme_keyring 00:28:30.075 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:30.075 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:28:30.075 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:28:30.075 20:58:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1803533 ']' 00:28:30.075 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1803533 00:28:30.075 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1803533 ']' 00:28:30.075 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1803533 00:28:30.075 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:28:30.075 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:30.075 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1803533 00:28:30.076 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:30.076 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:30.076 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1803533' 00:28:30.076 killing process with pid 1803533 00:28:30.076 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1803533 00:28:30.076 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1803533 00:28:30.335 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:30.335 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:30.335 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:30.335 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:28:30.335 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:28:30.335 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:30.335 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:28:30.335 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:30.335 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:30.335 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.335 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:30.335 20:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.867 20:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:32.867 00:28:32.867 real 0m16.162s 00:28:32.867 user 0m21.326s 00:28:32.867 sys 0m3.844s 00:28:32.867 20:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:28:32.867 20:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:32.867 ************************************ 00:28:32.867 END TEST nvmf_queue_depth 00:28:32.867 ************************************ 00:28:32.867 20:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:28:32.867 20:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:32.867 20:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:32.867 20:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:32.867 ************************************ 00:28:32.867 START TEST nvmf_target_multipath 00:28:32.867 ************************************ 00:28:32.867 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:28:32.867 * Looking for test storage... 00:28:32.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:32.867 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:32.867 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:28:32.867 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:32.867 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:32.867 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:32.867 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:32.867 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:32.867 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:28:32.867 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:28:32.867 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:28:32.867 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:28:32.867 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:28:32.867 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:28:32.867 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:28:32.867 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:32.867 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:28:32.867 20:58:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:28:32.867 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:32.867 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:32.867 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:32.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.868 --rc genhtml_branch_coverage=1 00:28:32.868 --rc genhtml_function_coverage=1 00:28:32.868 --rc genhtml_legend=1 00:28:32.868 --rc geninfo_all_blocks=1 00:28:32.868 --rc geninfo_unexecuted_blocks=1 00:28:32.868 00:28:32.868 ' 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:32.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.868 --rc genhtml_branch_coverage=1 00:28:32.868 --rc genhtml_function_coverage=1 00:28:32.868 --rc genhtml_legend=1 00:28:32.868 --rc geninfo_all_blocks=1 00:28:32.868 --rc geninfo_unexecuted_blocks=1 00:28:32.868 00:28:32.868 ' 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:32.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.868 --rc genhtml_branch_coverage=1 00:28:32.868 --rc genhtml_function_coverage=1 00:28:32.868 --rc genhtml_legend=1 00:28:32.868 --rc geninfo_all_blocks=1 00:28:32.868 --rc 
geninfo_unexecuted_blocks=1 00:28:32.868 00:28:32.868 ' 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:32.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:32.868 --rc genhtml_branch_coverage=1 00:28:32.868 --rc genhtml_function_coverage=1 00:28:32.868 --rc genhtml_legend=1 00:28:32.868 --rc geninfo_all_blocks=1 00:28:32.868 --rc geninfo_unexecuted_blocks=1 00:28:32.868 00:28:32.868 ' 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
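The long scripts/common.sh walk above is just a semantic version comparison: the detected lcov version is split on dots and compared field by field against 2 to decide which coverage options to export. A simplified stand-in for that logic (illustrative only, not the real cmp_versions helper):

  # Returns success (0) when version $1 is strictly older than version $2.
  version_lt() {
      local IFS=. i
      local -a a=($1) b=($2)
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          local x=${a[i]:-0} y=${b[i]:-0}
          (( x < y )) && return 0
          (( x > y )) && return 1
      done
      return 1
  }
  version_lt 1.15 2 && echo "lcov 1.15 predates 2: keep the legacy --rc options"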
00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:28:32.868 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:32.869 20:58:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:32.869 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:32.869 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:32.869 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:32.869 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:32.869 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:32.869 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:32.869 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:32.869 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:32.869 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:32.869 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:32.869 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:32.869 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:32.869 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:28:32.869 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:32.869 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:32.869 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:32.869 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:32.869 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:32.869 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:32.869 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:32.869 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:32.869 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:32.869 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:32.869 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:28:32.869 20:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
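nvmftestinit now repeats the NIC discovery for the multipath test: it scans the PCI bus for supported NVMe-oF-capable NICs and records the kernel netdevs behind each function, which is what finds the two E810 ports (cvl_0_0 and cvl_0_1) in the trace below. A simplified sketch of that discovery, not the actual gather_supported_nvmf_pci_devs helper:

  # Match Intel E810 PCI IDs (0x1592 / 0x159b) and list the netdev(s) behind each function.
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(cat "$pci/vendor"); device=$(cat "$pci/device")
      if [ "$vendor" = "0x8086" ] && { [ "$device" = "0x1592" ] || [ "$device" = "0x159b" ]; }; then
          echo "E810 function ${pci##*/}: $(ls "$pci/net" 2>/dev/null)"
      fi
  done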
00:28:34.766 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:34.766 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:28:34.766 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:34.766 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:34.766 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:34.766 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:34.766 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:34.766 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:28:34.766 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:34.766 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:28:34.766 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:28:34.766 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:28:34.766 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:28:34.766 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:28:34.766 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:28:34.766 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:34.766 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:34.766 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:34.766 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:34.766 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:34.766 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:34.766 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:34.766 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:34.766 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:34.766 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:34.766 20:58:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:34.766 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:34.766 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:34.766 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:34.766 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:34.766 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:34.766 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:34.766 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:34.767 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:34.767 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:34.767 20:58:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:34.767 Found net devices under 0000:09:00.0: cvl_0_0 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:34.767 Found net devices under 0000:09:00.1: cvl_0_1 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:34.767 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:35.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:35.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms 00:28:35.026 00:28:35.026 --- 10.0.0.2 ping statistics --- 00:28:35.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:35.026 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:35.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:35.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:28:35.026 00:28:35.026 --- 10.0.0.1 ping statistics --- 00:28:35.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:35.026 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:28:35.026 only one NIC for nvmf test 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:35.026 rmmod nvme_tcp 00:28:35.026 rmmod nvme_fabrics 00:28:35.026 rmmod nvme_keyring 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:28:35.026 20:58:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:35.026 20:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:36.929 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:36.929 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:28:36.929 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:28:36.929 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:36.929 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:28:36.929 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:36.929 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:28:36.929 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:36.929 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:36.929 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:36.929 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:28:36.929 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:28:36.929 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:28:36.929 20:58:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:36.929 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:36.929 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:36.929 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:28:36.929 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:28:36.929 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:36.929 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:28:36.929 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:36.929 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:36.929 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:36.929 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:36.929 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:36.929 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:36.929 00:28:36.929 real 0m4.613s 00:28:36.929 user 0m0.938s 00:28:36.929 sys 0m1.698s 00:28:36.929 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:36.929 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:28:36.929 ************************************ 00:28:36.929 END TEST nvmf_target_multipath 00:28:36.929 ************************************ 00:28:37.188 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:28:37.188 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:37.189 ************************************ 00:28:37.189 START TEST nvmf_zcopy 00:28:37.189 ************************************ 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:28:37.189 * Looking for test storage... 
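For reference, the nvmftestfini teardown traced above (the multipath test bails out with 'only one NIC for nvmf test' and exits 0) boils down to roughly the following sequence. This is a sketch reconstructed from the xtrace lines, not the harness source; the interface and namespace names are the ones used in this run, and _remove_spdk_ns is the harness helper seen in the trace.

modprobe -v -r nvme-tcp          # also drops nvme_fabrics / nvme_keyring, per the rmmod lines above
modprobe -v -r nvme-fabrics
# Remove only the firewall rules the test added; they are tagged with an SPDK_NVMF comment:
iptables-save | grep -v SPDK_NVMF | iptables-restore
_remove_spdk_ns                  # tears down the cvl_0_0_ns_spdk namespace created during init
ip -4 addr flush cvl_0_1         # drop the 10.0.0.1/24 address from the initiator-side port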
00:28:37.189 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:37.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.189 --rc genhtml_branch_coverage=1 00:28:37.189 --rc genhtml_function_coverage=1 00:28:37.189 --rc genhtml_legend=1 00:28:37.189 --rc geninfo_all_blocks=1 00:28:37.189 --rc geninfo_unexecuted_blocks=1 00:28:37.189 00:28:37.189 ' 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:37.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.189 --rc genhtml_branch_coverage=1 00:28:37.189 --rc genhtml_function_coverage=1 00:28:37.189 --rc genhtml_legend=1 00:28:37.189 --rc geninfo_all_blocks=1 00:28:37.189 --rc geninfo_unexecuted_blocks=1 00:28:37.189 00:28:37.189 ' 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:37.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.189 --rc genhtml_branch_coverage=1 00:28:37.189 --rc genhtml_function_coverage=1 00:28:37.189 --rc genhtml_legend=1 00:28:37.189 --rc geninfo_all_blocks=1 00:28:37.189 --rc geninfo_unexecuted_blocks=1 00:28:37.189 00:28:37.189 ' 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:37.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.189 --rc genhtml_branch_coverage=1 00:28:37.189 --rc genhtml_function_coverage=1 00:28:37.189 --rc genhtml_legend=1 00:28:37.189 --rc geninfo_all_blocks=1 00:28:37.189 --rc geninfo_unexecuted_blocks=1 00:28:37.189 00:28:37.189 ' 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:37.189 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.190 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.190 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.190 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:28:37.190 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.190 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:28:37.190 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:37.190 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:37.190 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:37.190 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:37.190 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:37.190 20:58:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:37.190 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:37.190 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:37.190 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:37.190 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:37.190 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:28:37.190 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:37.190 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:37.190 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:37.190 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:37.190 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:37.190 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.190 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:37.190 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.190 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:37.190 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:37.190 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:28:37.190 20:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:39.720 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:39.720 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:28:39.721 20:58:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:39.721 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:39.721 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:39.721 Found net devices under 0000:09:00.0: cvl_0_0 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:39.721 Found net devices under 0000:09:00.1: cvl_0_1 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:39.721 20:58:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:39.721 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:39.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:39.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.360 ms 00:28:39.721 00:28:39.721 --- 10.0.0.2 ping statistics --- 00:28:39.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.722 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:28:39.722 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:39.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:39.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:28:39.722 00:28:39.722 --- 10.0.0.1 ping statistics --- 00:28:39.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:39.722 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:28:39.722 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:39.722 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:28:39.722 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:39.722 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:39.722 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:39.722 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:39.722 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:39.722 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:39.722 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:39.722 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:28:39.722 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:39.722 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:39.722 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:39.722 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1808742 00:28:39.722 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:28:39.722 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1808742 00:28:39.722 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1808742 ']' 00:28:39.722 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:39.722 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:39.722 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:39.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:39.722 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:39.722 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:39.722 [2024-11-26 20:58:43.278685] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:39.722 [2024-11-26 20:58:43.279744] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:28:39.722 [2024-11-26 20:58:43.279806] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:39.722 [2024-11-26 20:58:43.349923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.722 [2024-11-26 20:58:43.405285] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:39.722 [2024-11-26 20:58:43.405341] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:39.722 [2024-11-26 20:58:43.405362] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:39.722 [2024-11-26 20:58:43.405375] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:39.722 [2024-11-26 20:58:43.405384] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:39.722 [2024-11-26 20:58:43.405955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:39.980 [2024-11-26 20:58:43.495712] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:39.981 [2024-11-26 20:58:43.496049] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
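Condensed from the nvmf_tcp_init and nvmfappstart steps traced above, the bring-up for this zcopy run is roughly the sketch below. Commands and arguments are taken from the xtrace (the nvmf_tgt path is shortened here); waitforlisten is the harness helper that blocks until the RPC socket /var/tmp/spdk.sock is listening.

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                            # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator port stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1   # sanity-check both directions

# Target app pinned to one core (-m 0x2), all trace groups enabled, interrupt mode:
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!
waitforlisten "$nvmfpid"                                   # pid 1808742 in this run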
00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:39.981 [2024-11-26 20:58:43.550618] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:39.981 [2024-11-26 20:58:43.566758] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:28:39.981 20:58:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:39.981 malloc0 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:39.981 { 00:28:39.981 "params": { 00:28:39.981 "name": "Nvme$subsystem", 00:28:39.981 "trtype": "$TEST_TRANSPORT", 00:28:39.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:39.981 "adrfam": "ipv4", 00:28:39.981 "trsvcid": "$NVMF_PORT", 00:28:39.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:39.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:39.981 "hdgst": ${hdgst:-false}, 00:28:39.981 "ddgst": ${ddgst:-false} 00:28:39.981 }, 00:28:39.981 "method": "bdev_nvme_attach_controller" 00:28:39.981 } 00:28:39.981 EOF 00:28:39.981 )") 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:28:39.981 20:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:39.981 "params": { 00:28:39.981 "name": "Nvme1", 00:28:39.981 "trtype": "tcp", 00:28:39.981 "traddr": "10.0.0.2", 00:28:39.981 "adrfam": "ipv4", 00:28:39.981 "trsvcid": "4420", 00:28:39.981 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:39.981 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:39.981 "hdgst": false, 00:28:39.981 "ddgst": false 00:28:39.981 }, 00:28:39.981 "method": "bdev_nvme_attach_controller" 00:28:39.981 }' 00:28:39.981 [2024-11-26 20:58:43.651753] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
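The bdevperf run above takes its bdev configuration from /dev/fd/62, which carries the JSON emitted by gen_nvmf_target_json (presumably handed over via process substitution). A standalone equivalent would look roughly like the sketch below; the attach-controller entry is copied from the trace, while the outer subsystems/bdev wrapper and the /tmp/nvme1.json path are assumptions for illustration.

cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# 10 s verify workload, queue depth 128, 8 KiB I/O size, matching the trace:
./build/examples/bdevperf --json /tmp/nvme1.json -t 10 -q 128 -w verify -o 8192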
00:28:39.981 [2024-11-26 20:58:43.651825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1808762 ] 00:28:40.239 [2024-11-26 20:58:43.719529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.239 [2024-11-26 20:58:43.779824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.497 Running I/O for 10 seconds... 00:28:42.364 5610.00 IOPS, 43.83 MiB/s [2024-11-26T19:58:46.994Z] 5653.00 IOPS, 44.16 MiB/s [2024-11-26T19:58:48.366Z] 5676.00 IOPS, 44.34 MiB/s [2024-11-26T19:58:49.304Z] 5692.00 IOPS, 44.47 MiB/s [2024-11-26T19:58:50.241Z] 5695.80 IOPS, 44.50 MiB/s [2024-11-26T19:58:51.174Z] 5696.83 IOPS, 44.51 MiB/s [2024-11-26T19:58:52.107Z] 5701.71 IOPS, 44.54 MiB/s [2024-11-26T19:58:53.144Z] 5705.88 IOPS, 44.58 MiB/s [2024-11-26T19:58:54.076Z] 5703.44 IOPS, 44.56 MiB/s [2024-11-26T19:58:54.076Z] 5706.80 IOPS, 44.58 MiB/s 00:28:50.379 Latency(us) 00:28:50.379 [2024-11-26T19:58:54.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:50.379 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:28:50.379 Verification LBA range: start 0x0 length 0x1000 00:28:50.379 Nvme1n1 : 10.02 5708.21 44.60 0.00 0.00 22362.89 3131.16 29515.47 00:28:50.379 [2024-11-26T19:58:54.076Z] =================================================================================================================== 00:28:50.379 [2024-11-26T19:58:54.076Z] Total : 5708.21 44.60 0.00 0.00 22362.89 3131.16 29515.47 00:28:50.637 20:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1810060 00:28:50.637 20:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:28:50.637 20:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:50.637 20:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:28:50.637 20:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:28:50.637 20:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:28:50.637 20:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:28:50.637 20:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:50.637 20:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:50.637 { 00:28:50.637 "params": { 00:28:50.637 "name": "Nvme$subsystem", 00:28:50.637 "trtype": "$TEST_TRANSPORT", 00:28:50.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.637 "adrfam": "ipv4", 00:28:50.637 "trsvcid": "$NVMF_PORT", 00:28:50.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.637 "hdgst": ${hdgst:-false}, 00:28:50.637 "ddgst": ${ddgst:-false} 00:28:50.637 }, 00:28:50.638 "method": "bdev_nvme_attach_controller" 00:28:50.638 } 00:28:50.638 EOF 00:28:50.638 )") 00:28:50.638 20:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:28:50.638 
[2024-11-26 20:58:54.230524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.638 [2024-11-26 20:58:54.230573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.638 20:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:28:50.638 20:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:28:50.638 20:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:50.638 "params": { 00:28:50.638 "name": "Nvme1", 00:28:50.638 "trtype": "tcp", 00:28:50.638 "traddr": "10.0.0.2", 00:28:50.638 "adrfam": "ipv4", 00:28:50.638 "trsvcid": "4420", 00:28:50.638 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:50.638 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:50.638 "hdgst": false, 00:28:50.638 "ddgst": false 00:28:50.638 }, 00:28:50.638 "method": "bdev_nvme_attach_controller" 00:28:50.638 }' 00:28:50.638 [2024-11-26 20:58:54.238459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.638 [2024-11-26 20:58:54.238485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.638 [2024-11-26 20:58:54.246455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.638 [2024-11-26 20:58:54.246478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.638 [2024-11-26 20:58:54.254453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.638 [2024-11-26 20:58:54.254475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.638 [2024-11-26 20:58:54.262454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.638 [2024-11-26 20:58:54.262476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.638 [2024-11-26 20:58:54.270450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.638 [2024-11-26 20:58:54.270471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.638 [2024-11-26 20:58:54.276010] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
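Every "Requested NSID 1 already in use" / "Unable to add namespace" pair that floods the rest of this run is expected noise: malloc0 was already attached as NSID 1 before the first bdevperf run, and while the second bdevperf instance (5 s randrw, queue depth 128, 8 KiB I/O) starts up and runs, the test keeps re-issuing the same add-namespace RPC. Judging by the two error sources, each attempt pauses the subsystem, fails inside the paused callback (nvmf_rpc_ns_paused), and lets the subsystem resume, exercising the pause/resume path under live I/O. A minimal sketch of that pattern, assuming the stock scripts/rpc.py wrapper and a hypothetical loop count (the actual loop in target/zcopy.sh is not shown in this excerpt; the trace uses the test framework's rpc_cmd helper for the same RPC):

    # Repeatedly re-add a namespace ID that is already in use while I/O is running;
    # every call is expected to fail, but each one forces a subsystem pause/resume.
    NQN=nqn.2016-06.io.spdk:cnode1
    for _ in $(seq 1 100); do
        scripts/rpc.py nvmf_subsystem_add_ns "$NQN" malloc0 -n 1 || true
    done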
00:28:50.638 [2024-11-26 20:58:54.276082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1810060 ] 00:28:50.638 [2024-11-26 20:58:54.278449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.638 [2024-11-26 20:58:54.278479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.638 [2024-11-26 20:58:54.286450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.638 [2024-11-26 20:58:54.286471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.638 [2024-11-26 20:58:54.294465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.638 [2024-11-26 20:58:54.294486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.638 [2024-11-26 20:58:54.302452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.638 [2024-11-26 20:58:54.302475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.638 [2024-11-26 20:58:54.310449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.638 [2024-11-26 20:58:54.310469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.638 [2024-11-26 20:58:54.318448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.638 [2024-11-26 20:58:54.318468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.638 [2024-11-26 20:58:54.326449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.638 [2024-11-26 20:58:54.326469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.896 [2024-11-26 20:58:54.334451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.896 [2024-11-26 20:58:54.334472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.896 [2024-11-26 20:58:54.342448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.896 [2024-11-26 20:58:54.342469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.896 [2024-11-26 20:58:54.346370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.896 [2024-11-26 20:58:54.350453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.896 [2024-11-26 20:58:54.350474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.896 [2024-11-26 20:58:54.358494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.896 [2024-11-26 20:58:54.358530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.896 [2024-11-26 20:58:54.366459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.896 [2024-11-26 20:58:54.366484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.896 [2024-11-26 20:58:54.374450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.896 [2024-11-26 20:58:54.374471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:28:50.896 [2024-11-26 20:58:54.382449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.896 [2024-11-26 20:58:54.382470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.896 [2024-11-26 20:58:54.390450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.896 [2024-11-26 20:58:54.390470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.896 [2024-11-26 20:58:54.398450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.896 [2024-11-26 20:58:54.398470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.896 [2024-11-26 20:58:54.406451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.896 [2024-11-26 20:58:54.406472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.896 [2024-11-26 20:58:54.408830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:50.896 [2024-11-26 20:58:54.414449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.896 [2024-11-26 20:58:54.414469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.896 [2024-11-26 20:58:54.422459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.896 [2024-11-26 20:58:54.422489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.896 [2024-11-26 20:58:54.430484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.896 [2024-11-26 20:58:54.430516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.897 [2024-11-26 20:58:54.438482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.897 [2024-11-26 20:58:54.438516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.897 [2024-11-26 20:58:54.446483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.897 [2024-11-26 20:58:54.446515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.897 [2024-11-26 20:58:54.454484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.897 [2024-11-26 20:58:54.454519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.897 [2024-11-26 20:58:54.462485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.897 [2024-11-26 20:58:54.462520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.897 [2024-11-26 20:58:54.470481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.897 [2024-11-26 20:58:54.470513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.897 [2024-11-26 20:58:54.478460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.897 [2024-11-26 20:58:54.478483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.897 [2024-11-26 20:58:54.486464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.897 [2024-11-26 20:58:54.486491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.897 [2024-11-26 
20:58:54.494489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.897 [2024-11-26 20:58:54.494532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.897 [2024-11-26 20:58:54.502486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.897 [2024-11-26 20:58:54.502521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.897 [2024-11-26 20:58:54.510474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.897 [2024-11-26 20:58:54.510505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.897 [2024-11-26 20:58:54.518449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.897 [2024-11-26 20:58:54.518469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.897 [2024-11-26 20:58:54.526450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.897 [2024-11-26 20:58:54.526470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.897 [2024-11-26 20:58:54.534457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.897 [2024-11-26 20:58:54.534482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.897 [2024-11-26 20:58:54.542455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.897 [2024-11-26 20:58:54.542479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.897 [2024-11-26 20:58:54.550455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.897 [2024-11-26 20:58:54.550478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.897 [2024-11-26 20:58:54.558456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.897 [2024-11-26 20:58:54.558480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.897 [2024-11-26 20:58:54.566450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.897 [2024-11-26 20:58:54.566471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.897 [2024-11-26 20:58:54.574450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.897 [2024-11-26 20:58:54.574478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.897 [2024-11-26 20:58:54.582449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.897 [2024-11-26 20:58:54.582469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:50.897 [2024-11-26 20:58:54.590452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:50.897 [2024-11-26 20:58:54.590473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.155 [2024-11-26 20:58:54.598457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.155 [2024-11-26 20:58:54.598481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.155 [2024-11-26 20:58:54.606470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.155 [2024-11-26 20:58:54.606494] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.155 [2024-11-26 20:58:54.614473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.155 [2024-11-26 20:58:54.614497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.155 [2024-11-26 20:58:54.622450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.155 [2024-11-26 20:58:54.622472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.155 [2024-11-26 20:58:54.630458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.155 [2024-11-26 20:58:54.630483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.155 [2024-11-26 20:58:54.638456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.155 [2024-11-26 20:58:54.638481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.155 Running I/O for 5 seconds... 00:28:51.155 [2024-11-26 20:58:54.654607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.155 [2024-11-26 20:58:54.654635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.155 [2024-11-26 20:58:54.665463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.155 [2024-11-26 20:58:54.665491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.155 [2024-11-26 20:58:54.677764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.155 [2024-11-26 20:58:54.677791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.155 [2024-11-26 20:58:54.689936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.155 [2024-11-26 20:58:54.689963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.155 [2024-11-26 20:58:54.699738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.155 [2024-11-26 20:58:54.699761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.155 [2024-11-26 20:58:54.715945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.155 [2024-11-26 20:58:54.715984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.155 [2024-11-26 20:58:54.725312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.155 [2024-11-26 20:58:54.725339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.155 [2024-11-26 20:58:54.739887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.155 [2024-11-26 20:58:54.739911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.155 [2024-11-26 20:58:54.749324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.155 [2024-11-26 20:58:54.749351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.155 [2024-11-26 20:58:54.764459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.155 [2024-11-26 20:58:54.764484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.155 [2024-11-26 20:58:54.774875] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.155 [2024-11-26 20:58:54.774906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.155 [2024-11-26 20:58:54.786248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.155 [2024-11-26 20:58:54.786271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.155 [2024-11-26 20:58:54.797132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.155 [2024-11-26 20:58:54.797155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.155 [2024-11-26 20:58:54.812358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.155 [2024-11-26 20:58:54.812400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.155 [2024-11-26 20:58:54.821326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.155 [2024-11-26 20:58:54.821353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.155 [2024-11-26 20:58:54.836615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.155 [2024-11-26 20:58:54.836640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.156 [2024-11-26 20:58:54.846254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.156 [2024-11-26 20:58:54.846281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.414 [2024-11-26 20:58:54.857910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.414 [2024-11-26 20:58:54.857933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.414 [2024-11-26 20:58:54.870275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.414 [2024-11-26 20:58:54.870309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.414 [2024-11-26 20:58:54.879973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.414 [2024-11-26 20:58:54.879999] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.414 [2024-11-26 20:58:54.895694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.414 [2024-11-26 20:58:54.895719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.414 [2024-11-26 20:58:54.905346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.414 [2024-11-26 20:58:54.905373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.414 [2024-11-26 20:58:54.920159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.414 [2024-11-26 20:58:54.920184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.414 [2024-11-26 20:58:54.936374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.414 [2024-11-26 20:58:54.936401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.414 [2024-11-26 20:58:54.945862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.414 [2024-11-26 20:58:54.945888] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.414 [2024-11-26 20:58:54.957115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.414 [2024-11-26 20:58:54.957141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.414 [2024-11-26 20:58:54.972320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.414 [2024-11-26 20:58:54.972345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.414 [2024-11-26 20:58:54.981692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.414 [2024-11-26 20:58:54.981718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.414 [2024-11-26 20:58:54.995996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.414 [2024-11-26 20:58:54.996034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.414 [2024-11-26 20:58:55.005810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.414 [2024-11-26 20:58:55.005835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.414 [2024-11-26 20:58:55.017136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.414 [2024-11-26 20:58:55.017160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.414 [2024-11-26 20:58:55.033215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.414 [2024-11-26 20:58:55.033239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.414 [2024-11-26 20:58:55.048779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.414 [2024-11-26 20:58:55.048806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.414 [2024-11-26 20:58:55.058136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.414 [2024-11-26 20:58:55.058161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.414 [2024-11-26 20:58:55.070051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.414 [2024-11-26 20:58:55.070076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.414 [2024-11-26 20:58:55.081003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.414 [2024-11-26 20:58:55.081027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.414 [2024-11-26 20:58:55.093973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.414 [2024-11-26 20:58:55.094000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.414 [2024-11-26 20:58:55.107950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.414 [2024-11-26 20:58:55.107978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.672 [2024-11-26 20:58:55.117066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.672 [2024-11-26 20:58:55.117092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.672 [2024-11-26 20:58:55.132120] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.672 [2024-11-26 20:58:55.132145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.672 [2024-11-26 20:58:55.142037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.672 [2024-11-26 20:58:55.142062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.672 [2024-11-26 20:58:55.153858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.672 [2024-11-26 20:58:55.153883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.672 [2024-11-26 20:58:55.167680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.672 [2024-11-26 20:58:55.167707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.672 [2024-11-26 20:58:55.176893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.672 [2024-11-26 20:58:55.176934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.672 [2024-11-26 20:58:55.190759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.672 [2024-11-26 20:58:55.190803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.672 [2024-11-26 20:58:55.200382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.672 [2024-11-26 20:58:55.200407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.672 [2024-11-26 20:58:55.212413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.672 [2024-11-26 20:58:55.212439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.672 [2024-11-26 20:58:55.228669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.672 [2024-11-26 20:58:55.228693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.672 [2024-11-26 20:58:55.238493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.672 [2024-11-26 20:58:55.238521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.672 [2024-11-26 20:58:55.250571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.672 [2024-11-26 20:58:55.250615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.672 [2024-11-26 20:58:55.261008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.672 [2024-11-26 20:58:55.261048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.672 [2024-11-26 20:58:55.276343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.672 [2024-11-26 20:58:55.276371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.672 [2024-11-26 20:58:55.286153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.672 [2024-11-26 20:58:55.286178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.672 [2024-11-26 20:58:55.297982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.672 [2024-11-26 20:58:55.298007] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.672 [2024-11-26 20:58:55.312560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.672 [2024-11-26 20:58:55.312602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.672 [2024-11-26 20:58:55.322133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.672 [2024-11-26 20:58:55.322160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.672 [2024-11-26 20:58:55.334081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.672 [2024-11-26 20:58:55.334106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.672 [2024-11-26 20:58:55.345110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.672 [2024-11-26 20:58:55.345134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.672 [2024-11-26 20:58:55.360036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.672 [2024-11-26 20:58:55.360060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.930 [2024-11-26 20:58:55.369728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.930 [2024-11-26 20:58:55.369755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.930 [2024-11-26 20:58:55.381439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.930 [2024-11-26 20:58:55.381464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.930 [2024-11-26 20:58:55.395690] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.930 [2024-11-26 20:58:55.395717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.930 [2024-11-26 20:58:55.405106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.931 [2024-11-26 20:58:55.405129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.931 [2024-11-26 20:58:55.419041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.931 [2024-11-26 20:58:55.419065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.931 [2024-11-26 20:58:55.428349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.931 [2024-11-26 20:58:55.428376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.931 [2024-11-26 20:58:55.439831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.931 [2024-11-26 20:58:55.439857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.931 [2024-11-26 20:58:55.450097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.931 [2024-11-26 20:58:55.450120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.931 [2024-11-26 20:58:55.463852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.931 [2024-11-26 20:58:55.463893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.931 [2024-11-26 20:58:55.473422] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.931 [2024-11-26 20:58:55.473448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.931 [2024-11-26 20:58:55.487511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.931 [2024-11-26 20:58:55.487537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.931 [2024-11-26 20:58:55.497495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.931 [2024-11-26 20:58:55.497521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.931 [2024-11-26 20:58:55.513084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.931 [2024-11-26 20:58:55.513108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.931 [2024-11-26 20:58:55.528166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.931 [2024-11-26 20:58:55.528193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.931 [2024-11-26 20:58:55.537502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.931 [2024-11-26 20:58:55.537529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.931 [2024-11-26 20:58:55.551632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.931 [2024-11-26 20:58:55.551671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.931 [2024-11-26 20:58:55.560963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.931 [2024-11-26 20:58:55.560988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.931 [2024-11-26 20:58:55.574974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.931 [2024-11-26 20:58:55.575015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.931 [2024-11-26 20:58:55.584748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.931 [2024-11-26 20:58:55.584773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.931 [2024-11-26 20:58:55.599260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.931 [2024-11-26 20:58:55.599286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.931 [2024-11-26 20:58:55.608924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.931 [2024-11-26 20:58:55.608951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:51.931 [2024-11-26 20:58:55.623497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:51.931 [2024-11-26 20:58:55.623524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.190 [2024-11-26 20:58:55.632764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.190 [2024-11-26 20:58:55.632788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.190 11687.00 IOPS, 91.30 MiB/s [2024-11-26T19:58:55.887Z] [2024-11-26 20:58:55.647189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:28:52.190 [2024-11-26 20:58:55.647213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.190 [2024-11-26 20:58:55.656585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.190 [2024-11-26 20:58:55.656627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.190 [2024-11-26 20:58:55.668552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.190 [2024-11-26 20:58:55.668580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.190 [2024-11-26 20:58:55.682588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.190 [2024-11-26 20:58:55.682636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.190 [2024-11-26 20:58:55.692134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.190 [2024-11-26 20:58:55.692160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.190 [2024-11-26 20:58:55.704088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.190 [2024-11-26 20:58:55.704114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.190 [2024-11-26 20:58:55.719922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.190 [2024-11-26 20:58:55.719948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.190 [2024-11-26 20:58:55.729142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.190 [2024-11-26 20:58:55.729167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.190 [2024-11-26 20:58:55.742927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.190 [2024-11-26 20:58:55.742952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.190 [2024-11-26 20:58:55.752440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.190 [2024-11-26 20:58:55.752466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.190 [2024-11-26 20:58:55.764029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.190 [2024-11-26 20:58:55.764069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.190 [2024-11-26 20:58:55.779619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.190 [2024-11-26 20:58:55.779645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.190 [2024-11-26 20:58:55.788942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.190 [2024-11-26 20:58:55.788983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.190 [2024-11-26 20:58:55.800639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.190 [2024-11-26 20:58:55.800664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.190 [2024-11-26 20:58:55.814376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.190 [2024-11-26 20:58:55.814402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.190 [2024-11-26 20:58:55.823995] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.190 [2024-11-26 20:58:55.824022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.190 [2024-11-26 20:58:55.839419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.190 [2024-11-26 20:58:55.839446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.190 [2024-11-26 20:58:55.848987] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.190 [2024-11-26 20:58:55.849013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.190 [2024-11-26 20:58:55.860487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.190 [2024-11-26 20:58:55.860529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.190 [2024-11-26 20:58:55.875763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.190 [2024-11-26 20:58:55.875790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.448 [2024-11-26 20:58:55.885408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.449 [2024-11-26 20:58:55.885435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.449 [2024-11-26 20:58:55.898547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.449 [2024-11-26 20:58:55.898574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.449 [2024-11-26 20:58:55.908097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.449 [2024-11-26 20:58:55.908131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.449 [2024-11-26 20:58:55.919555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.449 [2024-11-26 20:58:55.919596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.449 [2024-11-26 20:58:55.930028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.449 [2024-11-26 20:58:55.930053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.449 [2024-11-26 20:58:55.942909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.449 [2024-11-26 20:58:55.942935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.449 [2024-11-26 20:58:55.952412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.449 [2024-11-26 20:58:55.952440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.449 [2024-11-26 20:58:55.964088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.449 [2024-11-26 20:58:55.964115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.449 [2024-11-26 20:58:55.979592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.449 [2024-11-26 20:58:55.979632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.449 [2024-11-26 20:58:55.988567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.449 [2024-11-26 20:58:55.988594] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.449 [2024-11-26 20:58:56.000150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.449 [2024-11-26 20:58:56.000175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.449 [2024-11-26 20:58:56.016239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.449 [2024-11-26 20:58:56.016266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.449 [2024-11-26 20:58:56.034843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.449 [2024-11-26 20:58:56.034868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.449 [2024-11-26 20:58:56.044903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.449 [2024-11-26 20:58:56.044928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.449 [2024-11-26 20:58:56.059176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.449 [2024-11-26 20:58:56.059202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.449 [2024-11-26 20:58:56.068795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.449 [2024-11-26 20:58:56.068821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.449 [2024-11-26 20:58:56.084334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.449 [2024-11-26 20:58:56.084361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.449 [2024-11-26 20:58:56.101962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.449 [2024-11-26 20:58:56.101988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.449 [2024-11-26 20:58:56.111786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.449 [2024-11-26 20:58:56.111812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.449 [2024-11-26 20:58:56.127562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.449 [2024-11-26 20:58:56.127602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.449 [2024-11-26 20:58:56.136857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.449 [2024-11-26 20:58:56.136883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.707 [2024-11-26 20:58:56.152521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.707 [2024-11-26 20:58:56.152558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.707 [2024-11-26 20:58:56.161826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.707 [2024-11-26 20:58:56.161852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.707 [2024-11-26 20:58:56.173378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.707 [2024-11-26 20:58:56.173404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.707 [2024-11-26 20:58:56.185793] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.707 [2024-11-26 20:58:56.185819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.707 [2024-11-26 20:58:56.195382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.707 [2024-11-26 20:58:56.195409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.707 [2024-11-26 20:58:56.206859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.707 [2024-11-26 20:58:56.206884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.707 [2024-11-26 20:58:56.216753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.707 [2024-11-26 20:58:56.216777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.707 [2024-11-26 20:58:56.231629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.707 [2024-11-26 20:58:56.231669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.707 [2024-11-26 20:58:56.241079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.707 [2024-11-26 20:58:56.241104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.707 [2024-11-26 20:58:56.256945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.707 [2024-11-26 20:58:56.256971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.707 [2024-11-26 20:58:56.272655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.707 [2024-11-26 20:58:56.272683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.707 [2024-11-26 20:58:56.282488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.707 [2024-11-26 20:58:56.282515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.707 [2024-11-26 20:58:56.294101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.707 [2024-11-26 20:58:56.294126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.707 [2024-11-26 20:58:56.305379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.707 [2024-11-26 20:58:56.305406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.707 [2024-11-26 20:58:56.319192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.707 [2024-11-26 20:58:56.319219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.707 [2024-11-26 20:58:56.328723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.707 [2024-11-26 20:58:56.328748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.707 [2024-11-26 20:58:56.340547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.707 [2024-11-26 20:58:56.340602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.707 [2024-11-26 20:58:56.355411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.707 [2024-11-26 20:58:56.355439] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.707 [2024-11-26 20:58:56.364645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.707 [2024-11-26 20:58:56.364671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.707 [2024-11-26 20:58:56.376241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.707 [2024-11-26 20:58:56.376278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.707 [2024-11-26 20:58:56.391684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.707 [2024-11-26 20:58:56.391710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.707 [2024-11-26 20:58:56.400817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.707 [2024-11-26 20:58:56.400844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.966 [2024-11-26 20:58:56.412478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.966 [2024-11-26 20:58:56.412504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.966 [2024-11-26 20:58:56.427950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.966 [2024-11-26 20:58:56.427976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.966 [2024-11-26 20:58:56.437145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.966 [2024-11-26 20:58:56.437171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.966 [2024-11-26 20:58:56.451453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.966 [2024-11-26 20:58:56.451480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.966 [2024-11-26 20:58:56.461560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.966 [2024-11-26 20:58:56.461602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.966 [2024-11-26 20:58:56.475405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.966 [2024-11-26 20:58:56.475431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.966 [2024-11-26 20:58:56.485276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.966 [2024-11-26 20:58:56.485312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.966 [2024-11-26 20:58:56.499968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.966 [2024-11-26 20:58:56.499994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.966 [2024-11-26 20:58:56.509435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.966 [2024-11-26 20:58:56.509462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.966 [2024-11-26 20:58:56.524191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.966 [2024-11-26 20:58:56.524217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.966 [2024-11-26 20:58:56.542849] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.966 [2024-11-26 20:58:56.542875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.966 [2024-11-26 20:58:56.552399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.966 [2024-11-26 20:58:56.552426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.966 [2024-11-26 20:58:56.564106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.967 [2024-11-26 20:58:56.564132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.967 [2024-11-26 20:58:56.579798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.967 [2024-11-26 20:58:56.579838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.967 [2024-11-26 20:58:56.589463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.967 [2024-11-26 20:58:56.589490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.967 [2024-11-26 20:58:56.603599] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.967 [2024-11-26 20:58:56.603626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.967 [2024-11-26 20:58:56.613090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.967 [2024-11-26 20:58:56.613117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.967 [2024-11-26 20:58:56.627761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.967 [2024-11-26 20:58:56.627787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.967 [2024-11-26 20:58:56.637735] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.967 [2024-11-26 20:58:56.637761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.967 11757.00 IOPS, 91.85 MiB/s [2024-11-26T19:58:56.664Z] [2024-11-26 20:58:56.651142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.967 [2024-11-26 20:58:56.651168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:52.967 [2024-11-26 20:58:56.660409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:52.967 [2024-11-26 20:58:56.660436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.225 [2024-11-26 20:58:56.672017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.225 [2024-11-26 20:58:56.672044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.225 [2024-11-26 20:58:56.688740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.225 [2024-11-26 20:58:56.688783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.225 [2024-11-26 20:58:56.703936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.225 [2024-11-26 20:58:56.703963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.225 [2024-11-26 20:58:56.713393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:28:53.225 [2024-11-26 20:58:56.713429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.225 [2024-11-26 20:58:56.727163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.225 [2024-11-26 20:58:56.727203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.225 [2024-11-26 20:58:56.736646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.225 [2024-11-26 20:58:56.736673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.225 [2024-11-26 20:58:56.748386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.225 [2024-11-26 20:58:56.748413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.225 [2024-11-26 20:58:56.761190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.225 [2024-11-26 20:58:56.761216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.225 [2024-11-26 20:58:56.776616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.225 [2024-11-26 20:58:56.776658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.225 [2024-11-26 20:58:56.785807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.225 [2024-11-26 20:58:56.785832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.225 [2024-11-26 20:58:56.797671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.225 [2024-11-26 20:58:56.797697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.225 [2024-11-26 20:58:56.811731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.225 [2024-11-26 20:58:56.811758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.225 [2024-11-26 20:58:56.820895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.225 [2024-11-26 20:58:56.820921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.225 [2024-11-26 20:58:56.836175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.225 [2024-11-26 20:58:56.836199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.225 [2024-11-26 20:58:56.852560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.225 [2024-11-26 20:58:56.852596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.225 [2024-11-26 20:58:56.862520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.225 [2024-11-26 20:58:56.862547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.225 [2024-11-26 20:58:56.874381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.225 [2024-11-26 20:58:56.874421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.225 [2024-11-26 20:58:56.885159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.225 [2024-11-26 20:58:56.885184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.225 [2024-11-26 20:58:56.899562] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.225 [2024-11-26 20:58:56.899588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.225 [2024-11-26 20:58:56.909321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.225 [2024-11-26 20:58:56.909348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.484 [2024-11-26 20:58:56.922407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.484 [2024-11-26 20:58:56.922434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.484 [2024-11-26 20:58:56.931992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.484 [2024-11-26 20:58:56.932017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.484 [2024-11-26 20:58:56.943684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.484 [2024-11-26 20:58:56.943708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.484 [2024-11-26 20:58:56.953345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.484 [2024-11-26 20:58:56.953372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.484 [2024-11-26 20:58:56.966895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.484 [2024-11-26 20:58:56.966919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.484 [2024-11-26 20:58:56.976786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.484 [2024-11-26 20:58:56.976811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.484 [2024-11-26 20:58:56.988556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.484 [2024-11-26 20:58:56.988582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.484 [2024-11-26 20:58:57.005017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.484 [2024-11-26 20:58:57.005043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.484 [2024-11-26 20:58:57.020663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.484 [2024-11-26 20:58:57.020689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.484 [2024-11-26 20:58:57.030397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.484 [2024-11-26 20:58:57.030423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.484 [2024-11-26 20:58:57.041963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.484 [2024-11-26 20:58:57.041988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.484 [2024-11-26 20:58:57.052602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.484 [2024-11-26 20:58:57.052625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.484 [2024-11-26 20:58:57.066728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.484 [2024-11-26 20:58:57.066776] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.484 [2024-11-26 20:58:57.076326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.484 [2024-11-26 20:58:57.076352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.484 [2024-11-26 20:58:57.088230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.484 [2024-11-26 20:58:57.088256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.484 [2024-11-26 20:58:57.102615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.484 [2024-11-26 20:58:57.102655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.484 [2024-11-26 20:58:57.112188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.484 [2024-11-26 20:58:57.112214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.484 [2024-11-26 20:58:57.124096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.484 [2024-11-26 20:58:57.124121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.484 [2024-11-26 20:58:57.139977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.484 [2024-11-26 20:58:57.140003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.484 [2024-11-26 20:58:57.149362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.484 [2024-11-26 20:58:57.149389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.484 [2024-11-26 20:58:57.163188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.484 [2024-11-26 20:58:57.163214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.484 [2024-11-26 20:58:57.172893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.484 [2024-11-26 20:58:57.172917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.742 [2024-11-26 20:58:57.186379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.742 [2024-11-26 20:58:57.186406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.742 [2024-11-26 20:58:57.195922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.742 [2024-11-26 20:58:57.195948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.742 [2024-11-26 20:58:57.211545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.742 [2024-11-26 20:58:57.211571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.742 [2024-11-26 20:58:57.220875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.742 [2024-11-26 20:58:57.220899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.742 [2024-11-26 20:58:57.236733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.742 [2024-11-26 20:58:57.236759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.742 [2024-11-26 20:58:57.254499] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.742 [2024-11-26 20:58:57.254524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.742 [2024-11-26 20:58:57.264011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.742 [2024-11-26 20:58:57.264037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.742 [2024-11-26 20:58:57.275971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.742 [2024-11-26 20:58:57.275996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.742 [2024-11-26 20:58:57.292197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.742 [2024-11-26 20:58:57.292223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.742 [2024-11-26 20:58:57.300870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.742 [2024-11-26 20:58:57.300904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.742 [2024-11-26 20:58:57.314326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.742 [2024-11-26 20:58:57.314362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.742 [2024-11-26 20:58:57.323718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.742 [2024-11-26 20:58:57.323743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.742 [2024-11-26 20:58:57.335628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.743 [2024-11-26 20:58:57.335653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.743 [2024-11-26 20:58:57.351825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.743 [2024-11-26 20:58:57.351865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.743 [2024-11-26 20:58:57.361392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.743 [2024-11-26 20:58:57.361419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.743 [2024-11-26 20:58:57.375290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.743 [2024-11-26 20:58:57.375324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.743 [2024-11-26 20:58:57.384956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.743 [2024-11-26 20:58:57.384981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.743 [2024-11-26 20:58:57.399943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.743 [2024-11-26 20:58:57.399968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.743 [2024-11-26 20:58:57.409575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.743 [2024-11-26 20:58:57.409600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.743 [2024-11-26 20:58:57.423408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.743 [2024-11-26 20:58:57.423433] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:53.743 [2024-11-26 20:58:57.432479] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:53.743 [2024-11-26 20:58:57.432505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.001 [2024-11-26 20:58:57.443890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.001 [2024-11-26 20:58:57.443930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.001 [2024-11-26 20:58:57.461126] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.001 [2024-11-26 20:58:57.461150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.001 [2024-11-26 20:58:57.476565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.001 [2024-11-26 20:58:57.476607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.001 [2024-11-26 20:58:57.486169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.001 [2024-11-26 20:58:57.486205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.001 [2024-11-26 20:58:57.497765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.001 [2024-11-26 20:58:57.497791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.001 [2024-11-26 20:58:57.508330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.001 [2024-11-26 20:58:57.508356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.001 [2024-11-26 20:58:57.523839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.001 [2024-11-26 20:58:57.523865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.001 [2024-11-26 20:58:57.533314] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.001 [2024-11-26 20:58:57.533348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.001 [2024-11-26 20:58:57.547702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.001 [2024-11-26 20:58:57.547727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.001 [2024-11-26 20:58:57.557132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.001 [2024-11-26 20:58:57.557157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.001 [2024-11-26 20:58:57.568864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.001 [2024-11-26 20:58:57.568889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.001 [2024-11-26 20:58:57.583074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.001 [2024-11-26 20:58:57.583101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.001 [2024-11-26 20:58:57.593029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.001 [2024-11-26 20:58:57.593070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.001 [2024-11-26 20:58:57.607035] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.001 [2024-11-26 20:58:57.607060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.001 [2024-11-26 20:58:57.616863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.001 [2024-11-26 20:58:57.616889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.001 [2024-11-26 20:58:57.628596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.001 [2024-11-26 20:58:57.628621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.001 [2024-11-26 20:58:57.643463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.001 [2024-11-26 20:58:57.643504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.001 11742.00 IOPS, 91.73 MiB/s [2024-11-26T19:58:57.698Z] [2024-11-26 20:58:57.653027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.001 [2024-11-26 20:58:57.653054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.001 [2024-11-26 20:58:57.665068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.001 [2024-11-26 20:58:57.665093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.001 [2024-11-26 20:58:57.680805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.001 [2024-11-26 20:58:57.680829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.259 [2024-11-26 20:58:57.698161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.259 [2024-11-26 20:58:57.698187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.259 [2024-11-26 20:58:57.707504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.259 [2024-11-26 20:58:57.707530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.259 [2024-11-26 20:58:57.719050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.259 [2024-11-26 20:58:57.719088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.260 [2024-11-26 20:58:57.729849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.260 [2024-11-26 20:58:57.729889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.260 [2024-11-26 20:58:57.742417] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.260 [2024-11-26 20:58:57.742443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.260 [2024-11-26 20:58:57.751204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.260 [2024-11-26 20:58:57.751228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.260 [2024-11-26 20:58:57.763059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.260 [2024-11-26 20:58:57.763083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.260 [2024-11-26 20:58:57.773663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
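Note: the repeated record pair above comes from nvmf_subsystem_add_ns RPCs that keep requesting NSID 1 while that NSID is still attached to nqn.2016-06.io.spdk:cnode1; subsystem.c rejects the add and nvmf_rpc.c then reports the failed RPC. A minimal hedged reproduction of one such rejected call, assuming a running target that already exposes NSID 1 on cnode1 (malloc0 below is only a stand-in for whichever bdev the test loop passes):

  # NSID 1 is already in use on cnode1, so this add is rejected and the target logs
  # "Requested NSID 1 already in use" followed by "Unable to add namespace".
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
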
00:28:54.260 [2024-11-26 20:58:57.773701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.260 [2024-11-26 20:58:57.786940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.260 [2024-11-26 20:58:57.786967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.260 [2024-11-26 20:58:57.796123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.260 [2024-11-26 20:58:57.796150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.260 [2024-11-26 20:58:57.807828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.260 [2024-11-26 20:58:57.807853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.260 [2024-11-26 20:58:57.823732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.260 [2024-11-26 20:58:57.823757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.260 [2024-11-26 20:58:57.833211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.260 [2024-11-26 20:58:57.833235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.260 [2024-11-26 20:58:57.847671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.260 [2024-11-26 20:58:57.847696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.260 [2024-11-26 20:58:57.857453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.260 [2024-11-26 20:58:57.857481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.260 [2024-11-26 20:58:57.871768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.260 [2024-11-26 20:58:57.871793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.260 [2024-11-26 20:58:57.881017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.260 [2024-11-26 20:58:57.881041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.260 [2024-11-26 20:58:57.895288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.260 [2024-11-26 20:58:57.895324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.260 [2024-11-26 20:58:57.905067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.260 [2024-11-26 20:58:57.905092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.260 [2024-11-26 20:58:57.919773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.260 [2024-11-26 20:58:57.919797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.260 [2024-11-26 20:58:57.929709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.260 [2024-11-26 20:58:57.929734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.260 [2024-11-26 20:58:57.943694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.260 [2024-11-26 20:58:57.943733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.260 [2024-11-26 20:58:57.952891] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.260 [2024-11-26 20:58:57.952918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.518 [2024-11-26 20:58:57.964190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.518 [2024-11-26 20:58:57.964229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.518 [2024-11-26 20:58:57.982249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.518 [2024-11-26 20:58:57.982275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.518 [2024-11-26 20:58:57.991513] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.518 [2024-11-26 20:58:57.991539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.518 [2024-11-26 20:58:58.003239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.518 [2024-11-26 20:58:58.003263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.518 [2024-11-26 20:58:58.014063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.518 [2024-11-26 20:58:58.014088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.518 [2024-11-26 20:58:58.028839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.518 [2024-11-26 20:58:58.028880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.518 [2024-11-26 20:58:58.046424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.518 [2024-11-26 20:58:58.046450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.518 [2024-11-26 20:58:58.056099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.518 [2024-11-26 20:58:58.056124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.518 [2024-11-26 20:58:58.067800] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.518 [2024-11-26 20:58:58.067825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.518 [2024-11-26 20:58:58.084341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.518 [2024-11-26 20:58:58.084367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.518 [2024-11-26 20:58:58.093535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.518 [2024-11-26 20:58:58.093562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.518 [2024-11-26 20:58:58.109053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.518 [2024-11-26 20:58:58.109079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.518 [2024-11-26 20:58:58.123836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.518 [2024-11-26 20:58:58.123862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.518 [2024-11-26 20:58:58.133810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.518 [2024-11-26 20:58:58.133836] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.518 [2024-11-26 20:58:58.145407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.518 [2024-11-26 20:58:58.145434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.518 [2024-11-26 20:58:58.159020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.518 [2024-11-26 20:58:58.159046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.518 [2024-11-26 20:58:58.168460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.518 [2024-11-26 20:58:58.168486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.518 [2024-11-26 20:58:58.180213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.518 [2024-11-26 20:58:58.180239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.518 [2024-11-26 20:58:58.196085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.518 [2024-11-26 20:58:58.196111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.518 [2024-11-26 20:58:58.205323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.518 [2024-11-26 20:58:58.205350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.777 [2024-11-26 20:58:58.219351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.777 [2024-11-26 20:58:58.219378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.777 [2024-11-26 20:58:58.228847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.777 [2024-11-26 20:58:58.228874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.777 [2024-11-26 20:58:58.244639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.777 [2024-11-26 20:58:58.244664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.777 [2024-11-26 20:58:58.260796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.777 [2024-11-26 20:58:58.260837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.777 [2024-11-26 20:58:58.270066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.777 [2024-11-26 20:58:58.270092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.777 [2024-11-26 20:58:58.281569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.777 [2024-11-26 20:58:58.281609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.777 [2024-11-26 20:58:58.296916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.777 [2024-11-26 20:58:58.296958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.777 [2024-11-26 20:58:58.312786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.777 [2024-11-26 20:58:58.312813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.777 [2024-11-26 20:58:58.322392] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.777 [2024-11-26 20:58:58.322418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.777 [2024-11-26 20:58:58.334336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.777 [2024-11-26 20:58:58.334367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.777 [2024-11-26 20:58:58.345266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.777 [2024-11-26 20:58:58.345312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.777 [2024-11-26 20:58:58.359519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.777 [2024-11-26 20:58:58.359546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.777 [2024-11-26 20:58:58.369315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.777 [2024-11-26 20:58:58.369340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.777 [2024-11-26 20:58:58.383591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.777 [2024-11-26 20:58:58.383617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.777 [2024-11-26 20:58:58.392877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.777 [2024-11-26 20:58:58.392903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.777 [2024-11-26 20:58:58.408872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.777 [2024-11-26 20:58:58.408896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.777 [2024-11-26 20:58:58.424981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.777 [2024-11-26 20:58:58.425022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.777 [2024-11-26 20:58:58.434494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.777 [2024-11-26 20:58:58.434519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.777 [2024-11-26 20:58:58.445718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.777 [2024-11-26 20:58:58.445758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.777 [2024-11-26 20:58:58.460089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.777 [2024-11-26 20:58:58.460130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:54.777 [2024-11-26 20:58:58.469474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:54.777 [2024-11-26 20:58:58.469501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.036 [2024-11-26 20:58:58.483865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.036 [2024-11-26 20:58:58.483890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.036 [2024-11-26 20:58:58.492635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.036 [2024-11-26 20:58:58.492676] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.036 [2024-11-26 20:58:58.504357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.037 [2024-11-26 20:58:58.504384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.037 [2024-11-26 20:58:58.519511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.037 [2024-11-26 20:58:58.519537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.037 [2024-11-26 20:58:58.528842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.037 [2024-11-26 20:58:58.528868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.037 [2024-11-26 20:58:58.542613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.037 [2024-11-26 20:58:58.542638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.037 [2024-11-26 20:58:58.553334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.037 [2024-11-26 20:58:58.553360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.037 [2024-11-26 20:58:58.568226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.037 [2024-11-26 20:58:58.568253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.037 [2024-11-26 20:58:58.577495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.037 [2024-11-26 20:58:58.577521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.037 [2024-11-26 20:58:58.591575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.037 [2024-11-26 20:58:58.591616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.037 [2024-11-26 20:58:58.600700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.037 [2024-11-26 20:58:58.600725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.037 [2024-11-26 20:58:58.612269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.037 [2024-11-26 20:58:58.612317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.037 [2024-11-26 20:58:58.627224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.037 [2024-11-26 20:58:58.627250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.037 [2024-11-26 20:58:58.636374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.037 [2024-11-26 20:58:58.636400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.037 [2024-11-26 20:58:58.648132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.037 [2024-11-26 20:58:58.648159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.037 11767.25 IOPS, 91.93 MiB/s [2024-11-26T19:58:58.734Z] [2024-11-26 20:58:58.658489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.037 [2024-11-26 20:58:58.658515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.037 [2024-11-26 
20:58:58.669517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.037 [2024-11-26 20:58:58.669545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.037 [2024-11-26 20:58:58.680128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.037 [2024-11-26 20:58:58.680177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.037 [2024-11-26 20:58:58.693819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.037 [2024-11-26 20:58:58.693860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.037 [2024-11-26 20:58:58.706993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.037 [2024-11-26 20:58:58.707020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.037 [2024-11-26 20:58:58.717050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.037 [2024-11-26 20:58:58.717076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.296 [2024-11-26 20:58:58.731657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.296 [2024-11-26 20:58:58.731686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.296 [2024-11-26 20:58:58.741908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.296 [2024-11-26 20:58:58.741933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.296 [2024-11-26 20:58:58.752966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.296 [2024-11-26 20:58:58.752992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.296 [2024-11-26 20:58:58.765783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.296 [2024-11-26 20:58:58.765810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.296 [2024-11-26 20:58:58.779755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.296 [2024-11-26 20:58:58.779783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.296 [2024-11-26 20:58:58.789432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.296 [2024-11-26 20:58:58.789468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.296 [2024-11-26 20:58:58.803935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.296 [2024-11-26 20:58:58.803959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.296 [2024-11-26 20:58:58.812805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.296 [2024-11-26 20:58:58.812845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.296 [2024-11-26 20:58:58.824908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.296 [2024-11-26 20:58:58.824948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.296 [2024-11-26 20:58:58.840819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.296 [2024-11-26 20:58:58.840844] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.296 [2024-11-26 20:58:58.850540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.296 [2024-11-26 20:58:58.850567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.296 [2024-11-26 20:58:58.862131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.296 [2024-11-26 20:58:58.862170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.296 [2024-11-26 20:58:58.872842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.296 [2024-11-26 20:58:58.872870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.296 [2024-11-26 20:58:58.888732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.296 [2024-11-26 20:58:58.888759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.296 [2024-11-26 20:58:58.906691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.296 [2024-11-26 20:58:58.906716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.296 [2024-11-26 20:58:58.916723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.296 [2024-11-26 20:58:58.916755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.296 [2024-11-26 20:58:58.932839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.296 [2024-11-26 20:58:58.932864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.296 [2024-11-26 20:58:58.948745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.296 [2024-11-26 20:58:58.948770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.296 [2024-11-26 20:58:58.958171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.296 [2024-11-26 20:58:58.958196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.296 [2024-11-26 20:58:58.969898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.296 [2024-11-26 20:58:58.969922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.296 [2024-11-26 20:58:58.983458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.296 [2024-11-26 20:58:58.983485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.555 [2024-11-26 20:58:58.992915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.555 [2024-11-26 20:58:58.992940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.555 [2024-11-26 20:58:59.008118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.555 [2024-11-26 20:58:59.008143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.555 [2024-11-26 20:58:59.017385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.555 [2024-11-26 20:58:59.017427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.555 [2024-11-26 20:58:59.031757] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.555 [2024-11-26 20:58:59.031782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.555 [2024-11-26 20:58:59.040964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.555 [2024-11-26 20:58:59.040989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.555 [2024-11-26 20:58:59.054474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.555 [2024-11-26 20:58:59.054500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.555 [2024-11-26 20:58:59.063418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.555 [2024-11-26 20:58:59.063444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.555 [2024-11-26 20:58:59.074970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.555 [2024-11-26 20:58:59.074994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.555 [2024-11-26 20:58:59.085527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.555 [2024-11-26 20:58:59.085553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.555 [2024-11-26 20:58:59.099091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.555 [2024-11-26 20:58:59.099118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.555 [2024-11-26 20:58:59.108374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.555 [2024-11-26 20:58:59.108400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.555 [2024-11-26 20:58:59.119861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.555 [2024-11-26 20:58:59.119885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.555 [2024-11-26 20:58:59.129423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.555 [2024-11-26 20:58:59.129464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.555 [2024-11-26 20:58:59.144240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.555 [2024-11-26 20:58:59.144287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.555 [2024-11-26 20:58:59.153459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.555 [2024-11-26 20:58:59.153501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.555 [2024-11-26 20:58:59.167214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.555 [2024-11-26 20:58:59.167239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.555 [2024-11-26 20:58:59.176386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.555 [2024-11-26 20:58:59.176412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.555 [2024-11-26 20:58:59.188195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.555 [2024-11-26 20:58:59.188220] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.555 [2024-11-26 20:58:59.202870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.555 [2024-11-26 20:58:59.202897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.555 [2024-11-26 20:58:59.212185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.555 [2024-11-26 20:58:59.212224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.555 [2024-11-26 20:58:59.223673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.555 [2024-11-26 20:58:59.223712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.555 [2024-11-26 20:58:59.234532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.555 [2024-11-26 20:58:59.234558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.555 [2024-11-26 20:58:59.245667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.555 [2024-11-26 20:58:59.245691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.813 [2024-11-26 20:58:59.258700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.813 [2024-11-26 20:58:59.258727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.813 [2024-11-26 20:58:59.268146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.813 [2024-11-26 20:58:59.268170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.813 [2024-11-26 20:58:59.279725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.813 [2024-11-26 20:58:59.279750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.813 [2024-11-26 20:58:59.289861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.813 [2024-11-26 20:58:59.289885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.813 [2024-11-26 20:58:59.301230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.813 [2024-11-26 20:58:59.301254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.813 [2024-11-26 20:58:59.316069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.813 [2024-11-26 20:58:59.316095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.813 [2024-11-26 20:58:59.325476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.813 [2024-11-26 20:58:59.325502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.813 [2024-11-26 20:58:59.339436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.813 [2024-11-26 20:58:59.339461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.813 [2024-11-26 20:58:59.348952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.813 [2024-11-26 20:58:59.348978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.813 [2024-11-26 20:58:59.363327] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.813 [2024-11-26 20:58:59.363367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.813 [2024-11-26 20:58:59.372468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.813 [2024-11-26 20:58:59.372494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.813 [2024-11-26 20:58:59.383952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.813 [2024-11-26 20:58:59.383975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.813 [2024-11-26 20:58:59.394312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.813 [2024-11-26 20:58:59.394351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.813 [2024-11-26 20:58:59.405017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.813 [2024-11-26 20:58:59.405041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.813 [2024-11-26 20:58:59.420207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.813 [2024-11-26 20:58:59.420231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.813 [2024-11-26 20:58:59.429672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.813 [2024-11-26 20:58:59.429696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.813 [2024-11-26 20:58:59.443805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.813 [2024-11-26 20:58:59.443828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.813 [2024-11-26 20:58:59.453231] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.813 [2024-11-26 20:58:59.453256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.813 [2024-11-26 20:58:59.466891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.813 [2024-11-26 20:58:59.466918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.813 [2024-11-26 20:58:59.476726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.813 [2024-11-26 20:58:59.476752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.813 [2024-11-26 20:58:59.488614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.813 [2024-11-26 20:58:59.488639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:55.813 [2024-11-26 20:58:59.503814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:55.813 [2024-11-26 20:58:59.503839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.071 [2024-11-26 20:58:59.512988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.071 [2024-11-26 20:58:59.513028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.071 [2024-11-26 20:58:59.527296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.071 [2024-11-26 20:58:59.527327] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.071 [2024-11-26 20:58:59.536957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.071 [2024-11-26 20:58:59.536981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.071 [2024-11-26 20:58:59.551429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.071 [2024-11-26 20:58:59.551454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.071 [2024-11-26 20:58:59.560845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.071 [2024-11-26 20:58:59.560870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.071 [2024-11-26 20:58:59.575855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.071 [2024-11-26 20:58:59.575881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.071 [2024-11-26 20:58:59.585831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.071 [2024-11-26 20:58:59.585854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.071 [2024-11-26 20:58:59.597185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.071 [2024-11-26 20:58:59.597210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.071 [2024-11-26 20:58:59.612560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.071 [2024-11-26 20:58:59.612586] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.071 [2024-11-26 20:58:59.621859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.071 [2024-11-26 20:58:59.621884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.071 [2024-11-26 20:58:59.633855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.071 [2024-11-26 20:58:59.633880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.071 [2024-11-26 20:58:59.647715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.071 [2024-11-26 20:58:59.647740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.071 11780.80 IOPS, 92.04 MiB/s [2024-11-26T19:58:59.768Z] [2024-11-26 20:58:59.657172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.071 [2024-11-26 20:58:59.657196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.071 [2024-11-26 20:58:59.662976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.071 [2024-11-26 20:58:59.663002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.071 00:28:56.071 Latency(us) 00:28:56.071 [2024-11-26T19:58:59.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:56.071 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:28:56.071 Nvme1n1 : 5.01 11781.95 92.05 0.00 0.00 10849.02 2936.98 17961.72 00:28:56.071 [2024-11-26T19:58:59.768Z] =================================================================================================================== 00:28:56.071 
[2024-11-26T19:58:59.768Z] Total : 11781.95 92.05 0.00 0.00 10849.02 2936.98 17961.72 00:28:56.071 [2024-11-26 20:58:59.670453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.071 [2024-11-26 20:58:59.670476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.071 [2024-11-26 20:58:59.678454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.071 [2024-11-26 20:58:59.678477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.071 [2024-11-26 20:58:59.686454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.071 [2024-11-26 20:58:59.686477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.071 [2024-11-26 20:58:59.694516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.071 [2024-11-26 20:58:59.694559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.071 [2024-11-26 20:58:59.702508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.071 [2024-11-26 20:58:59.702549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.071 [2024-11-26 20:58:59.710508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.071 [2024-11-26 20:58:59.710550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.071 [2024-11-26 20:58:59.718501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.071 [2024-11-26 20:58:59.718541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.071 [2024-11-26 20:58:59.726498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.071 [2024-11-26 20:58:59.726550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.071 [2024-11-26 20:58:59.734498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.071 [2024-11-26 20:58:59.734536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.071 [2024-11-26 20:58:59.742505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.071 [2024-11-26 20:58:59.742545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.071 [2024-11-26 20:58:59.750501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.071 [2024-11-26 20:58:59.750540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.072 [2024-11-26 20:58:59.758502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.072 [2024-11-26 20:58:59.758540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.330 [2024-11-26 20:58:59.766516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.330 [2024-11-26 20:58:59.766557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.330 [2024-11-26 20:58:59.774508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.330 [2024-11-26 20:58:59.774546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.330 [2024-11-26 20:58:59.782501] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.330 [2024-11-26 20:58:59.782542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.330 [2024-11-26 20:58:59.790504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.330 [2024-11-26 20:58:59.790544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.330 [2024-11-26 20:58:59.798502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.330 [2024-11-26 20:58:59.798543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.330 [2024-11-26 20:58:59.806501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.330 [2024-11-26 20:58:59.806541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.330 [2024-11-26 20:58:59.814477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.330 [2024-11-26 20:58:59.814513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.330 [2024-11-26 20:58:59.822452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.330 [2024-11-26 20:58:59.822473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.330 [2024-11-26 20:58:59.830447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.330 [2024-11-26 20:58:59.830467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.330 [2024-11-26 20:58:59.838446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.330 [2024-11-26 20:58:59.838466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.330 [2024-11-26 20:58:59.846446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.330 [2024-11-26 20:58:59.846466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.330 [2024-11-26 20:58:59.854507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.330 [2024-11-26 20:58:59.854547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.330 [2024-11-26 20:58:59.866553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.330 [2024-11-26 20:58:59.866600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.330 [2024-11-26 20:58:59.874447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.330 [2024-11-26 20:58:59.874467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.330 [2024-11-26 20:58:59.882447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.330 [2024-11-26 20:58:59.882475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.330 [2024-11-26 20:58:59.890449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.330 [2024-11-26 20:58:59.890470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.330 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1810060) - No such process 00:28:56.330 20:58:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1810060 00:28:56.330 20:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:56.330 20:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.330 20:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:56.330 20:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.330 20:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:56.330 20:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.330 20:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:56.330 delay0 00:28:56.330 20:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.330 20:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:28:56.330 20:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.330 20:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:56.330 20:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.330 20:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:28:56.588 [2024-11-26 20:59:00.049472] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:29:04.693 Initializing NVMe Controllers 00:29:04.693 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:04.693 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:04.693 Initialization complete. Launching workers. 
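For readability, the zcopy steps traced above condense to roughly the sequence sketched here: the test removes the original namespace, wraps malloc0 in a delay bdev, re-exposes it as NSID 1, and then drives it over TCP with SPDK's bundled abort example (whose completion counters appear just below). This is a hedged reconstruction from the xtrace, not an excerpt of zcopy.sh; it assumes rpc_cmd wraps scripts/rpc.py and uses a repo-relative path for the abort binary.

    # Reconstructed from the trace above (sketch, not verbatim zcopy.sh)
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # Drive the delayed namespace for 5 s at QD 64, 50/50 randrw, issuing aborts against outstanding I/O
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'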
00:29:04.693 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 236, failed: 21099 00:29:04.693 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 21207, failed to submit 128 00:29:04.693 success 21134, unsuccessful 73, failed 0 00:29:04.693 20:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:29:04.693 20:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:29:04.693 20:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:04.693 20:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:29:04.693 20:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:04.693 20:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:29:04.693 20:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:04.693 20:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:04.693 rmmod nvme_tcp 00:29:04.693 rmmod nvme_fabrics 00:29:04.693 rmmod nvme_keyring 00:29:04.693 20:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:04.693 20:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:29:04.693 20:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:29:04.693 20:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1808742 ']' 00:29:04.693 20:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1808742 00:29:04.693 20:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1808742 ']' 00:29:04.693 20:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1808742 00:29:04.693 20:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:29:04.693 20:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:04.693 20:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1808742 00:29:04.693 20:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:04.693 20:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:04.693 20:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1808742' 00:29:04.693 killing process with pid 1808742 00:29:04.693 20:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1808742 00:29:04.693 20:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1808742 00:29:04.693 20:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:04.693 20:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:04.693 20:59:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:04.693 20:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:29:04.693 20:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:29:04.693 20:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:04.693 20:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:29:04.693 20:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:04.693 20:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:04.693 20:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.693 20:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:04.693 20:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:06.071 00:29:06.071 real 0m28.788s 00:29:06.071 user 0m40.760s 00:29:06.071 sys 0m10.154s 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:06.071 ************************************ 00:29:06.071 END TEST nvmf_zcopy 00:29:06.071 ************************************ 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:06.071 ************************************ 00:29:06.071 START TEST nvmf_nmic 00:29:06.071 ************************************ 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:29:06.071 * Looking for test storage... 
00:29:06.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:29:06.071 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:06.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:06.072 --rc genhtml_branch_coverage=1 00:29:06.072 --rc genhtml_function_coverage=1 00:29:06.072 --rc genhtml_legend=1 00:29:06.072 --rc geninfo_all_blocks=1 00:29:06.072 --rc geninfo_unexecuted_blocks=1 00:29:06.072 00:29:06.072 ' 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:06.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:06.072 --rc genhtml_branch_coverage=1 00:29:06.072 --rc genhtml_function_coverage=1 00:29:06.072 --rc genhtml_legend=1 00:29:06.072 --rc geninfo_all_blocks=1 00:29:06.072 --rc geninfo_unexecuted_blocks=1 00:29:06.072 00:29:06.072 ' 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:06.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:06.072 --rc genhtml_branch_coverage=1 00:29:06.072 --rc genhtml_function_coverage=1 00:29:06.072 --rc genhtml_legend=1 00:29:06.072 --rc geninfo_all_blocks=1 00:29:06.072 --rc geninfo_unexecuted_blocks=1 00:29:06.072 00:29:06.072 ' 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:06.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:06.072 --rc genhtml_branch_coverage=1 00:29:06.072 --rc genhtml_function_coverage=1 00:29:06.072 --rc genhtml_legend=1 00:29:06.072 --rc geninfo_all_blocks=1 00:29:06.072 --rc geninfo_unexecuted_blocks=1 00:29:06.072 00:29:06.072 ' 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:06.072 20:59:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:06.072 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:06.073 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:06.073 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:06.073 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:06.073 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:06.073 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:06.073 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:06.073 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:06.073 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:29:06.073 20:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:08.603 20:59:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:08.603 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:08.603 20:59:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:08.603 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:08.603 Found net devices under 0000:09:00.0: cvl_0_0 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:08.603 
20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:08.603 Found net devices under 0000:09:00.1: cvl_0_1 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:08.603 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
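The interface plumbing traced above moves one port of the detected e810 pair into a private network namespace, so the target side (cvl_0_0, 10.0.0.2, inside cvl_0_0_ns_spdk) and the initiator side (cvl_0_1, 10.0.0.1, default namespace) can exercise real hardware on a single host. A condensed sketch of those steps, taken from the trace; the link-up, iptables rule, and ping checks follow below.

    # Sketch reconstructed from the trace; interface names are the ones the harness detected
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace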
00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:08.604 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:08.604 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:29:08.604 00:29:08.604 --- 10.0.0.2 ping statistics --- 00:29:08.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.604 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:08.604 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:08.604 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:29:08.604 00:29:08.604 --- 10.0.0.1 ping statistics --- 00:29:08.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.604 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1813450 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1813450 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1813450 ']' 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:08.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:08.604 20:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:08.604 [2024-11-26 20:59:12.021873] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:08.604 [2024-11-26 20:59:12.022941] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:29:08.604 [2024-11-26 20:59:12.023004] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:08.604 [2024-11-26 20:59:12.093144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:08.604 [2024-11-26 20:59:12.148394] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:08.604 [2024-11-26 20:59:12.148448] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:08.604 [2024-11-26 20:59:12.148477] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:08.604 [2024-11-26 20:59:12.148489] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:08.604 [2024-11-26 20:59:12.148499] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:08.604 [2024-11-26 20:59:12.150108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:08.604 [2024-11-26 20:59:12.150169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:08.604 [2024-11-26 20:59:12.150238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:08.604 [2024-11-26 20:59:12.150241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.604 [2024-11-26 20:59:12.236761] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:08.604 [2024-11-26 20:59:12.237021] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:08.604 [2024-11-26 20:59:12.237357] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
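The nmic target is then launched inside that namespace with interrupt mode enabled, which is why the reactors and poll groups above report switching to intr mode. A minimal way to reproduce the launch and wait for the RPC socket is sketched below; the nvmf_tgt command line is taken verbatim from the trace, while the wait loop is a simplified stand-in for the harness's waitforlisten, assuming the default /var/tmp/spdk.sock socket and a repo-relative binary path.

    # Launch the target in interrupt mode inside the test namespace (sketch)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!   # kept so the test can kill the target during cleanup
    # Poll the JSON-RPC socket until the target is ready to accept commands
    until scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done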
00:29:08.604 [2024-11-26 20:59:12.237967] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:08.604 [2024-11-26 20:59:12.238192] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:08.604 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:08.604 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:29:08.604 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:08.604 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:08.604 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:08.604 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:08.604 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:08.604 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.604 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:08.604 [2024-11-26 20:59:12.295037] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:08.863 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.863 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:08.863 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.863 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:08.863 Malloc0 00:29:08.863 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.863 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:29:08.863 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.863 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:08.863 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.863 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:08.863 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.863 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:08.863 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.863 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:08.863 
20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.863 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:08.863 [2024-11-26 20:59:12.367119] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:08.863 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.863 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:29:08.863 test case1: single bdev can't be used in multiple subsystems 00:29:08.863 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:29:08.863 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.863 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:08.863 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.863 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:08.863 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.864 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:08.864 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.864 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:29:08.864 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:29:08.864 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.864 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:08.864 [2024-11-26 20:59:12.390869] bdev.c:8326:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:29:08.864 [2024-11-26 20:59:12.390897] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:29:08.864 [2024-11-26 20:59:12.390927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:08.864 request: 00:29:08.864 { 00:29:08.864 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:29:08.864 "namespace": { 00:29:08.864 "bdev_name": "Malloc0", 00:29:08.864 "no_auto_visible": false 00:29:08.864 }, 00:29:08.864 "method": "nvmf_subsystem_add_ns", 00:29:08.864 "req_id": 1 00:29:08.864 } 00:29:08.864 Got JSON-RPC error response 00:29:08.864 response: 00:29:08.864 { 00:29:08.864 "code": -32602, 00:29:08.864 "message": "Invalid parameters" 00:29:08.864 } 00:29:08.864 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:08.864 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:29:08.864 20:59:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:29:08.864 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:29:08.864 Adding namespace failed - expected result. 00:29:08.864 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:29:08.864 test case2: host connect to nvmf target in multiple paths 00:29:08.864 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:08.864 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.864 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:08.864 [2024-11-26 20:59:12.398960] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:08.864 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.864 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:29:09.122 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:29:09.380 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:29:09.380 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:29:09.380 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:29:09.380 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:29:09.380 20:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:29:11.276 20:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:29:11.276 20:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:29:11.276 20:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:29:11.276 20:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:29:11.276 20:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:29:11.276 20:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:29:11.276 20:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:29:11.276 [global] 00:29:11.276 thread=1 00:29:11.276 invalidate=1 
00:29:11.276 rw=write 00:29:11.276 time_based=1 00:29:11.276 runtime=1 00:29:11.276 ioengine=libaio 00:29:11.276 direct=1 00:29:11.276 bs=4096 00:29:11.276 iodepth=1 00:29:11.276 norandommap=0 00:29:11.276 numjobs=1 00:29:11.276 00:29:11.276 verify_dump=1 00:29:11.276 verify_backlog=512 00:29:11.276 verify_state_save=0 00:29:11.276 do_verify=1 00:29:11.276 verify=crc32c-intel 00:29:11.276 [job0] 00:29:11.276 filename=/dev/nvme0n1 00:29:11.276 Could not set queue depth (nvme0n1) 00:29:11.533 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:11.533 fio-3.35 00:29:11.533 Starting 1 thread 00:29:12.904 00:29:12.904 job0: (groupid=0, jobs=1): err= 0: pid=1813951: Tue Nov 26 20:59:16 2024 00:29:12.904 read: IOPS=21, BW=86.6KiB/s (88.7kB/s)(88.0KiB/1016msec) 00:29:12.904 slat (nsec): min=6771, max=39543, avg=23587.05, stdev=10877.34 00:29:12.904 clat (usec): min=40550, max=41081, avg=40947.88, stdev=102.80 00:29:12.904 lat (usec): min=40557, max=41095, avg=40971.47, stdev=103.92 00:29:12.904 clat percentiles (usec): 00:29:12.904 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:29:12.904 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:29:12.904 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:12.904 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:29:12.904 | 99.99th=[41157] 00:29:12.904 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:29:12.904 slat (usec): min=7, max=25913, avg=59.00, stdev=1144.85 00:29:12.904 clat (usec): min=143, max=233, avg=155.17, stdev= 8.35 00:29:12.904 lat (usec): min=151, max=26096, avg=214.17, stdev=1146.13 00:29:12.904 clat percentiles (usec): 00:29:12.904 | 1.00th=[ 147], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 149], 00:29:12.904 | 30.00th=[ 151], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 155], 00:29:12.904 | 70.00th=[ 155], 80.00th=[ 161], 90.00th=[ 167], 95.00th=[ 172], 00:29:12.904 | 99.00th=[ 180], 99.50th=[ 184], 99.90th=[ 233], 99.95th=[ 233], 00:29:12.904 | 99.99th=[ 233] 00:29:12.904 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:29:12.904 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:12.904 lat (usec) : 250=95.88% 00:29:12.904 lat (msec) : 50=4.12% 00:29:12.904 cpu : usr=0.30%, sys=0.59%, ctx=538, majf=0, minf=1 00:29:12.904 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:12.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:12.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:12.904 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:12.904 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:12.904 00:29:12.904 Run status group 0 (all jobs): 00:29:12.904 READ: bw=86.6KiB/s (88.7kB/s), 86.6KiB/s-86.6KiB/s (88.7kB/s-88.7kB/s), io=88.0KiB (90.1kB), run=1016-1016msec 00:29:12.904 WRITE: bw=2016KiB/s (2064kB/s), 2016KiB/s-2016KiB/s (2064kB/s-2064kB/s), io=2048KiB (2097kB), run=1016-1016msec 00:29:12.904 00:29:12.904 Disk stats (read/write): 00:29:12.904 nvme0n1: ios=45/512, merge=0/0, ticks=1765/80, in_queue=1845, util=98.40% 00:29:12.904 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:12.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:29:12.904 20:59:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:12.904 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:29:12.904 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:29:12.905 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:12.905 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:29:12.905 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:12.905 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:29:12.905 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:12.905 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:29:12.905 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:12.905 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:29:12.905 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:12.905 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:29:12.905 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:12.905 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:12.905 rmmod nvme_tcp 00:29:12.905 rmmod nvme_fabrics 00:29:12.905 rmmod nvme_keyring 00:29:12.905 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:12.905 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:29:12.905 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:29:12.905 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1813450 ']' 00:29:12.905 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1813450 00:29:12.905 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1813450 ']' 00:29:12.905 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1813450 00:29:12.905 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:29:12.905 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:12.905 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1813450 00:29:12.905 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:12.905 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:12.905 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 1813450' 00:29:12.905 killing process with pid 1813450 00:29:12.905 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1813450 00:29:12.905 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1813450 00:29:13.162 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:13.162 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:13.162 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:13.163 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:29:13.163 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:29:13.163 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:13.163 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:29:13.163 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:13.163 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:13.163 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:13.163 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:13.163 20:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.723 20:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:15.723 00:29:15.723 real 0m9.311s 00:29:15.723 user 0m17.431s 00:29:15.723 sys 0m3.316s 00:29:15.723 20:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:15.723 20:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:15.723 ************************************ 00:29:15.723 END TEST nvmf_nmic 00:29:15.723 ************************************ 00:29:15.723 20:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:29:15.723 20:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:15.723 20:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:15.723 20:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:15.723 ************************************ 00:29:15.723 START TEST nvmf_fio_target 00:29:15.723 ************************************ 00:29:15.723 20:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:29:15.723 * Looking for test storage... 
00:29:15.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:15.723 20:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:15.723 20:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:29:15.723 20:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:15.723 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:15.723 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:15.723 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:15.723 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:15.723 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:15.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.724 --rc genhtml_branch_coverage=1 00:29:15.724 --rc genhtml_function_coverage=1 00:29:15.724 --rc genhtml_legend=1 00:29:15.724 --rc geninfo_all_blocks=1 00:29:15.724 --rc geninfo_unexecuted_blocks=1 00:29:15.724 00:29:15.724 ' 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:15.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.724 --rc genhtml_branch_coverage=1 00:29:15.724 --rc genhtml_function_coverage=1 00:29:15.724 --rc genhtml_legend=1 00:29:15.724 --rc geninfo_all_blocks=1 00:29:15.724 --rc geninfo_unexecuted_blocks=1 00:29:15.724 00:29:15.724 ' 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:15.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.724 --rc genhtml_branch_coverage=1 00:29:15.724 --rc genhtml_function_coverage=1 00:29:15.724 --rc genhtml_legend=1 00:29:15.724 --rc geninfo_all_blocks=1 00:29:15.724 --rc geninfo_unexecuted_blocks=1 00:29:15.724 00:29:15.724 ' 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:15.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.724 --rc genhtml_branch_coverage=1 00:29:15.724 --rc genhtml_function_coverage=1 00:29:15.724 --rc genhtml_legend=1 00:29:15.724 --rc geninfo_all_blocks=1 00:29:15.724 --rc geninfo_unexecuted_blocks=1 00:29:15.724 
00:29:15.724 ' 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:29:15.724 20:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:17.631 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:17.631 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:29:17.631 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:17.631 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:17.631 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:17.631 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:17.631 20:59:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:17.631 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:29:17.631 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:17.631 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:29:17.631 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:29:17.631 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:29:17.631 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:29:17.631 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:29:17.631 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:29:17.631 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:17.631 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:17.631 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:17.631 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:17.631 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:17.631 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:17.631 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:17.632 20:59:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:17.632 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:17.632 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:17.632 Found net 
devices under 0000:09:00.0: cvl_0_0 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:17.632 Found net devices under 0000:09:00.1: cvl_0_1 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:17.632 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:17.891 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:17.891 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:17.891 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:17.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:17.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:29:17.891 00:29:17.891 --- 10.0.0.2 ping statistics --- 00:29:17.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.891 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:29:17.891 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:17.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:17.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:29:17.891 00:29:17.891 --- 10.0.0.1 ping statistics --- 00:29:17.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.891 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:29:17.891 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:17.891 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:29:17.891 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:17.891 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:17.891 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:17.891 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:17.891 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:17.891 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:17.891 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:17.891 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:29:17.891 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:17.891 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:17.892 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:17.892 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1816027 00:29:17.892 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:29:17.892 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1816027 00:29:17.892 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1816027 ']' 00:29:17.892 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:17.892 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:17.892 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:17.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:17.892 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:17.892 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:17.892 [2024-11-26 20:59:21.418440] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:17.892 [2024-11-26 20:59:21.419518] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:29:17.892 [2024-11-26 20:59:21.419581] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:17.892 [2024-11-26 20:59:21.491514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:17.892 [2024-11-26 20:59:21.550004] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:17.892 [2024-11-26 20:59:21.550054] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:17.892 [2024-11-26 20:59:21.550082] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:17.892 [2024-11-26 20:59:21.550093] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:17.892 [2024-11-26 20:59:21.550102] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:17.892 [2024-11-26 20:59:21.551724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.892 [2024-11-26 20:59:21.551790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:17.892 [2024-11-26 20:59:21.551840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:17.892 [2024-11-26 20:59:21.551843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:18.150 [2024-11-26 20:59:21.643107] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:18.150 [2024-11-26 20:59:21.643348] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:18.150 [2024-11-26 20:59:21.644183] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:18.150 [2024-11-26 20:59:21.644245] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:18.150 [2024-11-26 20:59:21.644524] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:29:18.150 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:18.150 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:29:18.150 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:18.150 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:18.150 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:18.150 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:18.150 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:18.408 [2024-11-26 20:59:21.952595] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:18.408 20:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:18.666 20:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:29:18.666 20:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:18.924 20:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:29:18.924 20:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:19.183 20:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:29:19.183 20:59:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:19.748 20:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:29:19.748 20:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:29:19.748 20:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:20.314 20:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:29:20.314 20:59:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:20.314 20:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:29:20.572 20:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:20.829 20:59:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:29:20.829 20:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:29:21.088 20:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:29:21.346 20:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:29:21.347 20:59:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:21.604 20:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:29:21.604 20:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:21.861 20:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:22.171 [2024-11-26 20:59:25.676749] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:22.171 20:59:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:29:22.457 20:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:29:22.714 20:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:29:22.972 20:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:29:22.972 20:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:29:22.973 20:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:29:22.973 20:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:29:22.973 20:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:29:22.973 20:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:29:25.500 20:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:29:25.500 20:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:29:25.500 20:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:29:25.500 20:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:29:25.500 20:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:29:25.500 20:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:29:25.500 20:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:29:25.500 [global] 00:29:25.500 thread=1 00:29:25.500 invalidate=1 00:29:25.500 rw=write 00:29:25.500 time_based=1 00:29:25.500 runtime=1 00:29:25.500 ioengine=libaio 00:29:25.500 direct=1 00:29:25.500 bs=4096 00:29:25.500 iodepth=1 00:29:25.500 norandommap=0 00:29:25.500 numjobs=1 00:29:25.500 00:29:25.500 verify_dump=1 00:29:25.500 verify_backlog=512 00:29:25.500 verify_state_save=0 00:29:25.500 do_verify=1 00:29:25.500 verify=crc32c-intel 00:29:25.500 [job0] 00:29:25.500 filename=/dev/nvme0n1 00:29:25.500 [job1] 00:29:25.500 filename=/dev/nvme0n2 00:29:25.500 [job2] 00:29:25.500 filename=/dev/nvme0n3 00:29:25.500 [job3] 00:29:25.500 filename=/dev/nvme0n4 00:29:25.500 Could not set queue depth (nvme0n1) 00:29:25.500 Could not set queue depth (nvme0n2) 00:29:25.500 Could not set queue depth (nvme0n3) 00:29:25.500 Could not set queue depth (nvme0n4) 00:29:25.500 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:25.500 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:25.500 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:25.500 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:25.500 fio-3.35 00:29:25.500 Starting 4 threads 00:29:26.432 00:29:26.432 job0: (groupid=0, jobs=1): err= 0: pid=1817095: Tue Nov 26 20:59:30 2024 00:29:26.432 read: IOPS=238, BW=955KiB/s (978kB/s)(956KiB/1001msec) 00:29:26.432 slat (nsec): min=5753, max=32997, avg=12533.96, stdev=5587.30 00:29:26.432 clat (usec): min=213, max=41018, avg=3732.27, stdev=11202.84 00:29:26.432 lat (usec): min=220, max=41032, avg=3744.81, stdev=11202.89 00:29:26.432 clat percentiles (usec): 00:29:26.432 | 1.00th=[ 231], 5.00th=[ 258], 10.00th=[ 277], 20.00th=[ 302], 00:29:26.432 | 30.00th=[ 310], 40.00th=[ 318], 50.00th=[ 326], 60.00th=[ 338], 00:29:26.432 | 70.00th=[ 367], 80.00th=[ 429], 90.00th=[ 545], 95.00th=[41157], 00:29:26.432 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:29:26.432 | 99.99th=[41157] 00:29:26.432 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:29:26.432 slat (nsec): min=7175, max=28257, avg=8365.84, stdev=1877.22 00:29:26.432 clat (usec): min=155, max=420, avg=191.91, stdev=24.06 00:29:26.432 lat (usec): min=163, max=429, avg=200.28, stdev=24.39 00:29:26.432 clat percentiles (usec): 00:29:26.432 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 178], 00:29:26.432 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 190], 00:29:26.432 | 70.00th=[ 196], 80.00th=[ 200], 90.00th=[ 217], 95.00th=[ 247], 00:29:26.432 | 99.00th=[ 273], 
99.50th=[ 285], 99.90th=[ 420], 99.95th=[ 420], 00:29:26.432 | 99.99th=[ 420] 00:29:26.432 bw ( KiB/s): min= 4096, max= 4096, per=28.15%, avg=4096.00, stdev= 0.00, samples=1 00:29:26.432 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:26.432 lat (usec) : 250=66.44%, 500=28.76%, 750=1.86% 00:29:26.432 lat (msec) : 2=0.13%, 4=0.13%, 50=2.66% 00:29:26.432 cpu : usr=0.20%, sys=1.30%, ctx=751, majf=0, minf=1 00:29:26.432 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:26.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:26.432 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:26.432 issued rwts: total=239,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:26.432 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:26.432 job1: (groupid=0, jobs=1): err= 0: pid=1817096: Tue Nov 26 20:59:30 2024 00:29:26.432 read: IOPS=21, BW=86.2KiB/s (88.3kB/s)(88.0KiB/1021msec) 00:29:26.432 slat (nsec): min=7493, max=29956, avg=13585.45, stdev=3917.61 00:29:26.432 clat (usec): min=40905, max=41090, avg=40989.24, stdev=53.51 00:29:26.432 lat (usec): min=40918, max=41105, avg=41002.82, stdev=53.10 00:29:26.432 clat percentiles (usec): 00:29:26.432 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:29:26.432 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:29:26.432 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:26.432 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:29:26.432 | 99.99th=[41157] 00:29:26.432 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:29:26.432 slat (nsec): min=7332, max=39097, avg=8554.37, stdev=2196.01 00:29:26.432 clat (usec): min=158, max=4151, avg=219.99, stdev=177.98 00:29:26.432 lat (usec): min=169, max=4180, avg=228.54, stdev=178.94 00:29:26.432 clat percentiles (usec): 00:29:26.432 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 188], 00:29:26.432 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 208], 00:29:26.432 | 70.00th=[ 229], 80.00th=[ 241], 90.00th=[ 251], 95.00th=[ 265], 00:29:26.432 | 99.00th=[ 314], 99.50th=[ 400], 99.90th=[ 4146], 99.95th=[ 4146], 00:29:26.432 | 99.99th=[ 4146] 00:29:26.432 bw ( KiB/s): min= 4096, max= 4096, per=28.15%, avg=4096.00, stdev= 0.00, samples=1 00:29:26.432 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:26.432 lat (usec) : 250=85.77%, 500=9.74%, 750=0.19% 00:29:26.432 lat (msec) : 10=0.19%, 50=4.12% 00:29:26.432 cpu : usr=0.59%, sys=0.29%, ctx=534, majf=0, minf=1 00:29:26.432 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:26.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:26.432 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:26.432 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:26.432 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:26.432 job2: (groupid=0, jobs=1): err= 0: pid=1817097: Tue Nov 26 20:59:30 2024 00:29:26.432 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:29:26.432 slat (nsec): min=5761, max=43206, avg=7911.28, stdev=3907.21 00:29:26.432 clat (usec): min=220, max=565, avg=254.70, stdev=23.14 00:29:26.432 lat (usec): min=226, max=572, avg=262.61, stdev=24.33 00:29:26.432 clat percentiles (usec): 00:29:26.432 | 1.00th=[ 223], 5.00th=[ 227], 10.00th=[ 229], 20.00th=[ 237], 00:29:26.432 | 30.00th=[ 243], 
40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 258], 00:29:26.432 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 293], 00:29:26.432 | 99.00th=[ 330], 99.50th=[ 347], 99.90th=[ 404], 99.95th=[ 465], 00:29:26.432 | 99.99th=[ 570] 00:29:26.432 write: IOPS=2175, BW=8703KiB/s (8912kB/s)(8712KiB/1001msec); 0 zone resets 00:29:26.432 slat (nsec): min=7617, max=47983, avg=9908.56, stdev=3786.29 00:29:26.432 clat (usec): min=157, max=1880, avg=197.30, stdev=45.68 00:29:26.432 lat (usec): min=166, max=1890, avg=207.21, stdev=46.23 00:29:26.432 clat percentiles (usec): 00:29:26.432 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 169], 00:29:26.432 | 30.00th=[ 182], 40.00th=[ 190], 50.00th=[ 196], 60.00th=[ 200], 00:29:26.432 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 229], 95.00th=[ 237], 00:29:26.432 | 99.00th=[ 281], 99.50th=[ 322], 99.90th=[ 404], 99.95th=[ 685], 00:29:26.432 | 99.99th=[ 1876] 00:29:26.432 bw ( KiB/s): min= 8880, max= 8880, per=61.03%, avg=8880.00, stdev= 0.00, samples=1 00:29:26.432 iops : min= 2220, max= 2220, avg=2220.00, stdev= 0.00, samples=1 00:29:26.432 lat (usec) : 250=73.28%, 500=26.64%, 750=0.05% 00:29:26.432 lat (msec) : 2=0.02% 00:29:26.432 cpu : usr=2.50%, sys=5.20%, ctx=4229, majf=0, minf=1 00:29:26.432 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:26.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:26.432 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:26.432 issued rwts: total=2048,2178,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:26.432 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:26.432 job3: (groupid=0, jobs=1): err= 0: pid=1817098: Tue Nov 26 20:59:30 2024 00:29:26.432 read: IOPS=54, BW=220KiB/s (225kB/s)(220KiB/1001msec) 00:29:26.432 slat (nsec): min=6980, max=27166, avg=13920.78, stdev=2624.22 00:29:26.432 clat (usec): min=349, max=41043, avg=15875.74, stdev=19890.80 00:29:26.432 lat (usec): min=364, max=41060, avg=15889.66, stdev=19890.85 00:29:26.432 clat percentiles (usec): 00:29:26.432 | 1.00th=[ 351], 5.00th=[ 351], 10.00th=[ 359], 20.00th=[ 363], 00:29:26.432 | 30.00th=[ 363], 40.00th=[ 367], 50.00th=[ 371], 60.00th=[ 420], 00:29:26.432 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:26.433 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:29:26.433 | 99.99th=[41157] 00:29:26.433 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:29:26.433 slat (usec): min=6, max=16367, avg=40.47, stdev=722.99 00:29:26.433 clat (usec): min=156, max=831, avg=205.42, stdev=40.96 00:29:26.433 lat (usec): min=164, max=16574, avg=245.90, stdev=724.22 00:29:26.433 clat percentiles (usec): 00:29:26.433 | 1.00th=[ 163], 5.00th=[ 174], 10.00th=[ 184], 20.00th=[ 192], 00:29:26.433 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 202], 60.00th=[ 206], 00:29:26.433 | 70.00th=[ 210], 80.00th=[ 212], 90.00th=[ 221], 95.00th=[ 235], 00:29:26.433 | 99.00th=[ 285], 99.50th=[ 412], 99.90th=[ 832], 99.95th=[ 832], 00:29:26.433 | 99.99th=[ 832] 00:29:26.433 bw ( KiB/s): min= 4096, max= 4096, per=28.15%, avg=4096.00, stdev= 0.00, samples=1 00:29:26.433 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:26.433 lat (usec) : 250=88.71%, 500=7.05%, 750=0.18%, 1000=0.18% 00:29:26.433 lat (msec) : 2=0.18%, 50=3.70% 00:29:26.433 cpu : usr=0.20%, sys=0.60%, ctx=570, majf=0, minf=1 00:29:26.433 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:26.433 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:26.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:26.433 issued rwts: total=55,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:26.433 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:26.433 00:29:26.433 Run status group 0 (all jobs): 00:29:26.433 READ: bw=9262KiB/s (9484kB/s), 86.2KiB/s-8184KiB/s (88.3kB/s-8380kB/s), io=9456KiB (9683kB), run=1001-1021msec 00:29:26.433 WRITE: bw=14.2MiB/s (14.9MB/s), 2006KiB/s-8703KiB/s (2054kB/s-8912kB/s), io=14.5MiB (15.2MB), run=1001-1021msec 00:29:26.433 00:29:26.433 Disk stats (read/write): 00:29:26.433 nvme0n1: ios=69/512, merge=0/0, ticks=751/97, in_queue=848, util=86.87% 00:29:26.433 nvme0n2: ios=67/512, merge=0/0, ticks=765/111, in_queue=876, util=90.85% 00:29:26.433 nvme0n3: ios=1659/2048, merge=0/0, ticks=1314/395, in_queue=1709, util=93.74% 00:29:26.433 nvme0n4: ios=66/512, merge=0/0, ticks=1088/100, in_queue=1188, util=95.07% 00:29:26.433 20:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:29:26.433 [global] 00:29:26.433 thread=1 00:29:26.433 invalidate=1 00:29:26.433 rw=randwrite 00:29:26.433 time_based=1 00:29:26.433 runtime=1 00:29:26.433 ioengine=libaio 00:29:26.433 direct=1 00:29:26.433 bs=4096 00:29:26.433 iodepth=1 00:29:26.433 norandommap=0 00:29:26.433 numjobs=1 00:29:26.433 00:29:26.433 verify_dump=1 00:29:26.433 verify_backlog=512 00:29:26.433 verify_state_save=0 00:29:26.433 do_verify=1 00:29:26.433 verify=crc32c-intel 00:29:26.433 [job0] 00:29:26.433 filename=/dev/nvme0n1 00:29:26.433 [job1] 00:29:26.433 filename=/dev/nvme0n2 00:29:26.433 [job2] 00:29:26.433 filename=/dev/nvme0n3 00:29:26.433 [job3] 00:29:26.433 filename=/dev/nvme0n4 00:29:26.690 Could not set queue depth (nvme0n1) 00:29:26.690 Could not set queue depth (nvme0n2) 00:29:26.690 Could not set queue depth (nvme0n3) 00:29:26.690 Could not set queue depth (nvme0n4) 00:29:26.690 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:26.690 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:26.690 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:26.690 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:26.690 fio-3.35 00:29:26.690 Starting 4 threads 00:29:28.060 00:29:28.060 job0: (groupid=0, jobs=1): err= 0: pid=1817325: Tue Nov 26 20:59:31 2024 00:29:28.060 read: IOPS=21, BW=85.4KiB/s (87.5kB/s)(88.0KiB/1030msec) 00:29:28.060 slat (nsec): min=6199, max=33643, avg=14880.95, stdev=4662.16 00:29:28.060 clat (usec): min=40843, max=41232, avg=40991.89, stdev=82.40 00:29:28.060 lat (usec): min=40876, max=41238, avg=41006.78, stdev=79.55 00:29:28.060 clat percentiles (usec): 00:29:28.060 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:29:28.060 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:29:28.060 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:28.060 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:29:28.060 | 99.99th=[41157] 00:29:28.060 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:29:28.060 slat (nsec): min=5973, 
max=48768, avg=13853.96, stdev=6243.33 00:29:28.060 clat (usec): min=158, max=366, avg=230.91, stdev=25.46 00:29:28.060 lat (usec): min=169, max=383, avg=244.76, stdev=25.83 00:29:28.060 clat percentiles (usec): 00:29:28.060 | 1.00th=[ 169], 5.00th=[ 186], 10.00th=[ 200], 20.00th=[ 217], 00:29:28.060 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 237], 00:29:28.060 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 258], 95.00th=[ 273], 00:29:28.060 | 99.00th=[ 306], 99.50th=[ 314], 99.90th=[ 367], 99.95th=[ 367], 00:29:28.060 | 99.99th=[ 367] 00:29:28.060 bw ( KiB/s): min= 4096, max= 4096, per=17.74%, avg=4096.00, stdev= 0.00, samples=1 00:29:28.060 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:28.060 lat (usec) : 250=79.78%, 500=16.10% 00:29:28.060 lat (msec) : 50=4.12% 00:29:28.060 cpu : usr=0.39%, sys=0.58%, ctx=536, majf=0, minf=2 00:29:28.060 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:28.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:28.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:28.060 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:28.060 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:28.060 job1: (groupid=0, jobs=1): err= 0: pid=1817326: Tue Nov 26 20:59:31 2024 00:29:28.060 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:29:28.060 slat (nsec): min=5800, max=48718, avg=9804.91, stdev=5167.71 00:29:28.060 clat (usec): min=232, max=41069, avg=369.65, stdev=1787.42 00:29:28.060 lat (usec): min=239, max=41077, avg=379.46, stdev=1787.36 00:29:28.060 clat percentiles (usec): 00:29:28.060 | 1.00th=[ 239], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 262], 00:29:28.060 | 30.00th=[ 269], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 297], 00:29:28.060 | 70.00th=[ 302], 80.00th=[ 310], 90.00th=[ 322], 95.00th=[ 338], 00:29:28.060 | 99.00th=[ 506], 99.50th=[ 586], 99.90th=[40633], 99.95th=[41157], 00:29:28.060 | 99.99th=[41157] 00:29:28.060 write: IOPS=1849, BW=7397KiB/s (7574kB/s)(7404KiB/1001msec); 0 zone resets 00:29:28.060 slat (nsec): min=7399, max=63304, avg=12503.50, stdev=6808.66 00:29:28.060 clat (usec): min=158, max=1333, avg=206.94, stdev=55.49 00:29:28.060 lat (usec): min=169, max=1341, avg=219.44, stdev=57.75 00:29:28.060 clat percentiles (usec): 00:29:28.060 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 178], 00:29:28.060 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 194], 60.00th=[ 202], 00:29:28.060 | 70.00th=[ 212], 80.00th=[ 225], 90.00th=[ 245], 95.00th=[ 265], 00:29:28.060 | 99.00th=[ 424], 99.50th=[ 457], 99.90th=[ 938], 99.95th=[ 1336], 00:29:28.060 | 99.99th=[ 1336] 00:29:28.060 bw ( KiB/s): min= 8175, max= 8175, per=35.40%, avg=8175.00, stdev= 0.00, samples=1 00:29:28.060 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:29:28.060 lat (usec) : 250=53.32%, 500=46.06%, 750=0.41%, 1000=0.09% 00:29:28.060 lat (msec) : 2=0.03%, 50=0.09% 00:29:28.060 cpu : usr=3.10%, sys=4.80%, ctx=3389, majf=0, minf=1 00:29:28.060 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:28.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:28.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:28.060 issued rwts: total=1536,1851,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:28.060 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:28.060 job2: (groupid=0, jobs=1): err= 0: pid=1817327: Tue 
Nov 26 20:59:31 2024 00:29:28.060 read: IOPS=1362, BW=5451KiB/s (5581kB/s)(5456KiB/1001msec) 00:29:28.060 slat (nsec): min=4573, max=66507, avg=11351.02, stdev=5467.36 00:29:28.060 clat (usec): min=217, max=41050, avg=452.02, stdev=2677.62 00:29:28.060 lat (usec): min=223, max=41068, avg=463.37, stdev=2677.73 00:29:28.060 clat percentiles (usec): 00:29:28.060 | 1.00th=[ 223], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 239], 00:29:28.060 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 260], 00:29:28.060 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 355], 95.00th=[ 469], 00:29:28.060 | 99.00th=[ 545], 99.50th=[ 578], 99.90th=[41157], 99.95th=[41157], 00:29:28.060 | 99.99th=[41157] 00:29:28.060 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:29:28.060 slat (usec): min=6, max=37398, avg=36.89, stdev=953.94 00:29:28.060 clat (usec): min=159, max=678, avg=196.33, stdev=35.68 00:29:28.060 lat (usec): min=166, max=38076, avg=233.21, stdev=966.94 00:29:28.060 clat percentiles (usec): 00:29:28.060 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 176], 00:29:28.060 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 192], 00:29:28.060 | 70.00th=[ 200], 80.00th=[ 219], 90.00th=[ 243], 95.00th=[ 251], 00:29:28.060 | 99.00th=[ 273], 99.50th=[ 314], 99.90th=[ 652], 99.95th=[ 676], 00:29:28.060 | 99.99th=[ 676] 00:29:28.060 bw ( KiB/s): min= 7936, max= 7936, per=34.36%, avg=7936.00, stdev= 0.00, samples=1 00:29:28.061 iops : min= 1984, max= 1984, avg=1984.00, stdev= 0.00, samples=1 00:29:28.061 lat (usec) : 250=69.28%, 500=29.41%, 750=1.10% 00:29:28.061 lat (msec) : 50=0.21% 00:29:28.061 cpu : usr=1.20%, sys=4.80%, ctx=2903, majf=0, minf=1 00:29:28.061 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:28.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:28.061 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:28.061 issued rwts: total=1364,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:28.061 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:28.061 job3: (groupid=0, jobs=1): err= 0: pid=1817328: Tue Nov 26 20:59:31 2024 00:29:28.061 read: IOPS=1672, BW=6689KiB/s (6850kB/s)(6696KiB/1001msec) 00:29:28.061 slat (nsec): min=5766, max=34849, avg=8203.51, stdev=3074.41 00:29:28.061 clat (usec): min=213, max=41117, avg=337.20, stdev=1766.55 00:29:28.061 lat (usec): min=219, max=41127, avg=345.40, stdev=1766.89 00:29:28.061 clat percentiles (usec): 00:29:28.061 | 1.00th=[ 217], 5.00th=[ 223], 10.00th=[ 225], 20.00th=[ 231], 00:29:28.061 | 30.00th=[ 235], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 253], 00:29:28.061 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 314], 00:29:28.061 | 99.00th=[ 457], 99.50th=[ 676], 99.90th=[41157], 99.95th=[41157], 00:29:28.061 | 99.99th=[41157] 00:29:28.061 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:29:28.061 slat (nsec): min=6703, max=50389, avg=10905.38, stdev=4808.07 00:29:28.061 clat (usec): min=153, max=3738, avg=190.31, stdev=108.43 00:29:28.061 lat (usec): min=161, max=3748, avg=201.22, stdev=108.88 00:29:28.061 clat percentiles (usec): 00:29:28.061 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 169], 00:29:28.061 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 190], 00:29:28.061 | 70.00th=[ 198], 80.00th=[ 206], 90.00th=[ 215], 95.00th=[ 223], 00:29:28.061 | 99.00th=[ 245], 99.50th=[ 255], 99.90th=[ 289], 99.95th=[ 3458], 00:29:28.061 | 99.99th=[ 
3752] 00:29:28.061 bw ( KiB/s): min= 8192, max= 8192, per=35.47%, avg=8192.00, stdev= 0.00, samples=1 00:29:28.061 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:29:28.061 lat (usec) : 250=79.20%, 500=20.37%, 750=0.19%, 1000=0.08% 00:29:28.061 lat (msec) : 4=0.05%, 20=0.03%, 50=0.08% 00:29:28.061 cpu : usr=2.30%, sys=4.60%, ctx=3724, majf=0, minf=2 00:29:28.061 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:28.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:28.061 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:28.061 issued rwts: total=1674,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:28.061 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:28.061 00:29:28.061 Run status group 0 (all jobs): 00:29:28.061 READ: bw=17.4MiB/s (18.3MB/s), 85.4KiB/s-6689KiB/s (87.5kB/s-6850kB/s), io=18.0MiB (18.8MB), run=1001-1030msec 00:29:28.061 WRITE: bw=22.6MiB/s (23.6MB/s), 1988KiB/s-8184KiB/s (2036kB/s-8380kB/s), io=23.2MiB (24.4MB), run=1001-1030msec 00:29:28.061 00:29:28.061 Disk stats (read/write): 00:29:28.061 nvme0n1: ios=42/512, merge=0/0, ticks=1561/105, in_queue=1666, util=85.97% 00:29:28.061 nvme0n2: ios=1316/1536, merge=0/0, ticks=1357/310, in_queue=1667, util=90.05% 00:29:28.061 nvme0n3: ios=1078/1501, merge=0/0, ticks=1000/286, in_queue=1286, util=94.91% 00:29:28.061 nvme0n4: ios=1561/1543, merge=0/0, ticks=1414/274, in_queue=1688, util=94.35% 00:29:28.061 20:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:29:28.061 [global] 00:29:28.061 thread=1 00:29:28.061 invalidate=1 00:29:28.061 rw=write 00:29:28.061 time_based=1 00:29:28.061 runtime=1 00:29:28.061 ioengine=libaio 00:29:28.061 direct=1 00:29:28.061 bs=4096 00:29:28.061 iodepth=128 00:29:28.061 norandommap=0 00:29:28.061 numjobs=1 00:29:28.061 00:29:28.061 verify_dump=1 00:29:28.061 verify_backlog=512 00:29:28.061 verify_state_save=0 00:29:28.061 do_verify=1 00:29:28.061 verify=crc32c-intel 00:29:28.061 [job0] 00:29:28.061 filename=/dev/nvme0n1 00:29:28.061 [job1] 00:29:28.061 filename=/dev/nvme0n2 00:29:28.061 [job2] 00:29:28.061 filename=/dev/nvme0n3 00:29:28.061 [job3] 00:29:28.061 filename=/dev/nvme0n4 00:29:28.061 Could not set queue depth (nvme0n1) 00:29:28.061 Could not set queue depth (nvme0n2) 00:29:28.061 Could not set queue depth (nvme0n3) 00:29:28.061 Could not set queue depth (nvme0n4) 00:29:28.319 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:28.319 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:28.319 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:28.319 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:28.319 fio-3.35 00:29:28.319 Starting 4 threads 00:29:29.691 00:29:29.691 job0: (groupid=0, jobs=1): err= 0: pid=1817556: Tue Nov 26 20:59:33 2024 00:29:29.691 read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:29:29.691 slat (usec): min=2, max=9549, avg=93.80, stdev=644.35 00:29:29.691 clat (usec): min=5496, max=52975, avg=13078.37, stdev=7137.00 00:29:29.691 lat (usec): min=5504, max=53903, avg=13172.17, stdev=7196.44 00:29:29.691 clat percentiles (usec): 00:29:29.691 | 
1.00th=[ 6063], 5.00th=[ 7308], 10.00th=[ 8291], 20.00th=[ 8848], 00:29:29.691 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10421], 60.00th=[10814], 00:29:29.691 | 70.00th=[11994], 80.00th=[15401], 90.00th=[24773], 95.00th=[25560], 00:29:29.691 | 99.00th=[44827], 99.50th=[46400], 99.90th=[51119], 99.95th=[52691], 00:29:29.691 | 99.99th=[53216] 00:29:29.691 write: IOPS=3216, BW=12.6MiB/s (13.2MB/s)(12.7MiB/1007msec); 0 zone resets 00:29:29.691 slat (usec): min=3, max=49019, avg=201.80, stdev=1384.16 00:29:29.691 clat (usec): min=3293, max=82040, avg=23851.64, stdev=19164.24 00:29:29.691 lat (usec): min=3299, max=84822, avg=24053.44, stdev=19314.49 00:29:29.691 clat percentiles (usec): 00:29:29.691 | 1.00th=[ 3392], 5.00th=[ 6259], 10.00th=[ 6849], 20.00th=[ 9634], 00:29:29.691 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11600], 60.00th=[19006], 00:29:29.691 | 70.00th=[31065], 80.00th=[44303], 90.00th=[56361], 95.00th=[60556], 00:29:29.691 | 99.00th=[67634], 99.50th=[74974], 99.90th=[82314], 99.95th=[82314], 00:29:29.691 | 99.99th=[82314] 00:29:29.691 bw ( KiB/s): min= 9416, max=15480, per=21.39%, avg=12448.00, stdev=4287.90, samples=2 00:29:29.691 iops : min= 2354, max= 3870, avg=3112.00, stdev=1071.97, samples=2 00:29:29.691 lat (msec) : 4=1.09%, 10=33.09%, 20=37.81%, 50=19.30%, 100=8.71% 00:29:29.691 cpu : usr=2.98%, sys=6.26%, ctx=333, majf=0, minf=1 00:29:29.691 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:29:29.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:29.691 issued rwts: total=3072,3239,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:29.691 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:29.691 job1: (groupid=0, jobs=1): err= 0: pid=1817558: Tue Nov 26 20:59:33 2024 00:29:29.691 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:29:29.691 slat (usec): min=3, max=19965, avg=133.86, stdev=947.23 00:29:29.691 clat (usec): min=7071, max=53958, avg=17183.61, stdev=8603.28 00:29:29.691 lat (usec): min=7088, max=53999, avg=17317.47, stdev=8704.19 00:29:29.691 clat percentiles (usec): 00:29:29.691 | 1.00th=[ 7898], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[10159], 00:29:29.691 | 30.00th=[10814], 40.00th=[11469], 50.00th=[12387], 60.00th=[17171], 00:29:29.691 | 70.00th=[21627], 80.00th=[25035], 90.00th=[29754], 95.00th=[34341], 00:29:29.691 | 99.00th=[38536], 99.50th=[38536], 99.90th=[49021], 99.95th=[49546], 00:29:29.691 | 99.99th=[53740] 00:29:29.691 write: IOPS=3793, BW=14.8MiB/s (15.5MB/s)(14.9MiB/1005msec); 0 zone resets 00:29:29.691 slat (usec): min=4, max=9652, avg=124.68, stdev=624.59 00:29:29.691 clat (usec): min=334, max=52546, avg=17203.54, stdev=9627.19 00:29:29.691 lat (usec): min=4354, max=52559, avg=17328.21, stdev=9699.79 00:29:29.691 clat percentiles (usec): 00:29:29.691 | 1.00th=[ 4817], 5.00th=[ 9110], 10.00th=[10028], 20.00th=[10552], 00:29:29.691 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11338], 60.00th=[15270], 00:29:29.691 | 70.00th=[20579], 80.00th=[24249], 90.00th=[31851], 95.00th=[35914], 00:29:29.691 | 99.00th=[46924], 99.50th=[47449], 99.90th=[50594], 99.95th=[52691], 00:29:29.691 | 99.99th=[52691] 00:29:29.691 bw ( KiB/s): min= 9784, max=19688, per=25.32%, avg=14736.00, stdev=7003.19, samples=2 00:29:29.691 iops : min= 2446, max= 4922, avg=3684.00, stdev=1750.80, samples=2 00:29:29.691 lat (usec) : 500=0.01% 00:29:29.691 lat (msec) : 10=14.32%, 20=49.77%, 50=35.75%, 100=0.15% 
00:29:29.691 cpu : usr=6.87%, sys=8.76%, ctx=327, majf=0, minf=1 00:29:29.691 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:29:29.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:29.691 issued rwts: total=3584,3812,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:29.691 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:29.691 job2: (groupid=0, jobs=1): err= 0: pid=1817559: Tue Nov 26 20:59:33 2024 00:29:29.691 read: IOPS=2631, BW=10.3MiB/s (10.8MB/s)(10.7MiB/1045msec) 00:29:29.691 slat (usec): min=3, max=15698, avg=174.02, stdev=1048.67 00:29:29.691 clat (usec): min=11901, max=58370, avg=24616.63, stdev=9330.06 00:29:29.691 lat (usec): min=11909, max=61056, avg=24790.65, stdev=9369.05 00:29:29.691 clat percentiles (usec): 00:29:29.691 | 1.00th=[12387], 5.00th=[14877], 10.00th=[15664], 20.00th=[16581], 00:29:29.691 | 30.00th=[17957], 40.00th=[21365], 50.00th=[21890], 60.00th=[25035], 00:29:29.691 | 70.00th=[27132], 80.00th=[28967], 90.00th=[35390], 95.00th=[43779], 00:29:29.691 | 99.00th=[57934], 99.50th=[57934], 99.90th=[58459], 99.95th=[58459], 00:29:29.691 | 99.99th=[58459] 00:29:29.691 write: IOPS=2939, BW=11.5MiB/s (12.0MB/s)(12.0MiB/1045msec); 0 zone resets 00:29:29.691 slat (usec): min=3, max=10810, avg=157.05, stdev=808.36 00:29:29.691 clat (usec): min=8661, max=52598, avg=20913.85, stdev=8299.82 00:29:29.691 lat (usec): min=8682, max=52609, avg=21070.90, stdev=8375.90 00:29:29.691 clat percentiles (usec): 00:29:29.691 | 1.00th=[13566], 5.00th=[14091], 10.00th=[14353], 20.00th=[14615], 00:29:29.691 | 30.00th=[14877], 40.00th=[15533], 50.00th=[17957], 60.00th=[19792], 00:29:29.691 | 70.00th=[23987], 80.00th=[25035], 90.00th=[31851], 95.00th=[38536], 00:29:29.691 | 99.00th=[49021], 99.50th=[50594], 99.90th=[51119], 99.95th=[52691], 00:29:29.691 | 99.99th=[52691] 00:29:29.691 bw ( KiB/s): min= 9784, max=14792, per=21.11%, avg=12288.00, stdev=3541.19, samples=2 00:29:29.691 iops : min= 2446, max= 3698, avg=3072.00, stdev=885.30, samples=2 00:29:29.691 lat (msec) : 10=0.10%, 20=47.89%, 50=50.52%, 100=1.49% 00:29:29.691 cpu : usr=4.89%, sys=8.43%, ctx=251, majf=0, minf=1 00:29:29.691 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:29:29.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:29.691 issued rwts: total=2750,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:29.691 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:29.691 job3: (groupid=0, jobs=1): err= 0: pid=1817560: Tue Nov 26 20:59:33 2024 00:29:29.691 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:29:29.691 slat (usec): min=2, max=15565, avg=102.61, stdev=754.91 00:29:29.691 clat (usec): min=3208, max=42543, avg=14058.91, stdev=6848.60 00:29:29.691 lat (usec): min=3217, max=42549, avg=14161.52, stdev=6886.95 00:29:29.691 clat percentiles (usec): 00:29:29.691 | 1.00th=[ 4752], 5.00th=[ 5669], 10.00th=[ 8717], 20.00th=[10683], 00:29:29.691 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11863], 60.00th=[12780], 00:29:29.691 | 70.00th=[14353], 80.00th=[16712], 90.00th=[23725], 95.00th=[29754], 00:29:29.691 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:29:29.691 | 99.99th=[42730] 00:29:29.691 write: IOPS=5066, BW=19.8MiB/s (20.8MB/s)(19.9MiB/1003msec); 0 zone resets 00:29:29.691 slat 
(usec): min=3, max=14388, avg=98.37, stdev=665.62 00:29:29.691 clat (usec): min=569, max=35616, avg=12230.13, stdev=2957.17 00:29:29.691 lat (usec): min=5290, max=35635, avg=12328.50, stdev=3007.52 00:29:29.691 clat percentiles (usec): 00:29:29.691 | 1.00th=[ 5669], 5.00th=[ 9241], 10.00th=[10290], 20.00th=[10683], 00:29:29.691 | 30.00th=[11338], 40.00th=[11731], 50.00th=[11863], 60.00th=[12125], 00:29:29.691 | 70.00th=[12256], 80.00th=[12649], 90.00th=[14091], 95.00th=[17695], 00:29:29.691 | 99.00th=[26346], 99.50th=[26608], 99.90th=[26870], 99.95th=[26870], 00:29:29.691 | 99.99th=[35390] 00:29:29.691 bw ( KiB/s): min=18808, max=20824, per=34.05%, avg=19816.00, stdev=1425.53, samples=2 00:29:29.691 iops : min= 4702, max= 5206, avg=4954.00, stdev=356.38, samples=2 00:29:29.691 lat (usec) : 750=0.01% 00:29:29.691 lat (msec) : 4=0.22%, 10=10.61%, 20=81.28%, 50=7.88% 00:29:29.691 cpu : usr=3.29%, sys=6.29%, ctx=303, majf=0, minf=1 00:29:29.691 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:29:29.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:29.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:29.691 issued rwts: total=4608,5082,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:29.691 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:29.691 00:29:29.691 Run status group 0 (all jobs): 00:29:29.691 READ: bw=52.4MiB/s (54.9MB/s), 10.3MiB/s-17.9MiB/s (10.8MB/s-18.8MB/s), io=54.7MiB (57.4MB), run=1003-1045msec 00:29:29.691 WRITE: bw=56.8MiB/s (59.6MB/s), 11.5MiB/s-19.8MiB/s (12.0MB/s-20.8MB/s), io=59.4MiB (62.3MB), run=1003-1045msec 00:29:29.691 00:29:29.691 Disk stats (read/write): 00:29:29.691 nvme0n1: ios=2612/3068, merge=0/0, ticks=18778/38920, in_queue=57698, util=91.68% 00:29:29.691 nvme0n2: ios=2610/2991, merge=0/0, ticks=24998/26278, in_queue=51276, util=94.61% 00:29:29.691 nvme0n3: ios=2236/2560, merge=0/0, ticks=25212/25677, in_queue=50889, util=96.34% 00:29:29.691 nvme0n4: ios=4049/4096, merge=0/0, ticks=27113/22959, in_queue=50072, util=94.63% 00:29:29.691 20:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:29:29.691 [global] 00:29:29.691 thread=1 00:29:29.691 invalidate=1 00:29:29.691 rw=randwrite 00:29:29.691 time_based=1 00:29:29.691 runtime=1 00:29:29.691 ioengine=libaio 00:29:29.692 direct=1 00:29:29.692 bs=4096 00:29:29.692 iodepth=128 00:29:29.692 norandommap=0 00:29:29.692 numjobs=1 00:29:29.692 00:29:29.692 verify_dump=1 00:29:29.692 verify_backlog=512 00:29:29.692 verify_state_save=0 00:29:29.692 do_verify=1 00:29:29.692 verify=crc32c-intel 00:29:29.692 [job0] 00:29:29.692 filename=/dev/nvme0n1 00:29:29.692 [job1] 00:29:29.692 filename=/dev/nvme0n2 00:29:29.692 [job2] 00:29:29.692 filename=/dev/nvme0n3 00:29:29.692 [job3] 00:29:29.692 filename=/dev/nvme0n4 00:29:29.692 Could not set queue depth (nvme0n1) 00:29:29.692 Could not set queue depth (nvme0n2) 00:29:29.692 Could not set queue depth (nvme0n3) 00:29:29.692 Could not set queue depth (nvme0n4) 00:29:29.692 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:29.692 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:29.692 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:29:29.692 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:29.692 fio-3.35 00:29:29.692 Starting 4 threads 00:29:31.065 00:29:31.065 job0: (groupid=0, jobs=1): err= 0: pid=1817904: Tue Nov 26 20:59:34 2024 00:29:31.065 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:29:31.065 slat (usec): min=2, max=14134, avg=126.89, stdev=850.67 00:29:31.065 clat (usec): min=5386, max=41402, avg=16683.69, stdev=7834.59 00:29:31.065 lat (usec): min=5390, max=43877, avg=16810.58, stdev=7881.16 00:29:31.065 clat percentiles (usec): 00:29:31.065 | 1.00th=[ 6063], 5.00th=[ 9110], 10.00th=[10159], 20.00th=[10552], 00:29:31.065 | 30.00th=[10945], 40.00th=[11469], 50.00th=[12911], 60.00th=[15795], 00:29:31.065 | 70.00th=[20317], 80.00th=[21890], 90.00th=[29754], 95.00th=[33817], 00:29:31.065 | 99.00th=[38011], 99.50th=[38011], 99.90th=[41157], 99.95th=[41157], 00:29:31.065 | 99.99th=[41157] 00:29:31.065 write: IOPS=3840, BW=15.0MiB/s (15.7MB/s)(15.1MiB/1004msec); 0 zone resets 00:29:31.065 slat (usec): min=3, max=12592, avg=134.72, stdev=807.49 00:29:31.065 clat (usec): min=1571, max=72322, avg=17536.52, stdev=12094.59 00:29:31.065 lat (usec): min=1580, max=72334, avg=17671.24, stdev=12163.77 00:29:31.065 clat percentiles (usec): 00:29:31.065 | 1.00th=[ 3720], 5.00th=[ 6587], 10.00th=[ 8586], 20.00th=[10683], 00:29:31.065 | 30.00th=[11076], 40.00th=[11600], 50.00th=[11994], 60.00th=[13829], 00:29:31.065 | 70.00th=[17695], 80.00th=[24511], 90.00th=[32900], 95.00th=[45876], 00:29:31.065 | 99.00th=[64226], 99.50th=[65799], 99.90th=[71828], 99.95th=[71828], 00:29:31.065 | 99.99th=[71828] 00:29:31.065 bw ( KiB/s): min= 9352, max=20480, per=25.95%, avg=14916.00, stdev=7868.68, samples=2 00:29:31.065 iops : min= 2338, max= 5120, avg=3729.00, stdev=1967.17, samples=2 00:29:31.065 lat (msec) : 2=0.17%, 4=0.42%, 10=10.86%, 20=58.43%, 50=28.43% 00:29:31.065 lat (msec) : 100=1.69% 00:29:31.065 cpu : usr=2.99%, sys=3.29%, ctx=301, majf=0, minf=1 00:29:31.065 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:29:31.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:31.065 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:31.065 issued rwts: total=3584,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:31.065 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:31.065 job1: (groupid=0, jobs=1): err= 0: pid=1817907: Tue Nov 26 20:59:34 2024 00:29:31.065 read: IOPS=3883, BW=15.2MiB/s (15.9MB/s)(15.2MiB/1004msec) 00:29:31.065 slat (usec): min=2, max=10370, avg=116.30, stdev=755.68 00:29:31.065 clat (usec): min=2133, max=40043, avg=14798.50, stdev=4370.65 00:29:31.065 lat (usec): min=4659, max=40059, avg=14914.79, stdev=4433.68 00:29:31.065 clat percentiles (usec): 00:29:31.065 | 1.00th=[ 8455], 5.00th=[ 9372], 10.00th=[10159], 20.00th=[10814], 00:29:31.065 | 30.00th=[11863], 40.00th=[12780], 50.00th=[13960], 60.00th=[15533], 00:29:31.065 | 70.00th=[16909], 80.00th=[18482], 90.00th=[20317], 95.00th=[21365], 00:29:31.065 | 99.00th=[27657], 99.50th=[33817], 99.90th=[40109], 99.95th=[40109], 00:29:31.065 | 99.99th=[40109] 00:29:31.065 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:29:31.065 slat (usec): min=3, max=11786, avg=125.76, stdev=703.92 00:29:31.065 clat (usec): min=6169, max=53487, avg=16941.17, stdev=10266.50 00:29:31.065 lat (usec): min=6186, max=53497, avg=17066.93, stdev=10346.27 00:29:31.065 clat percentiles 
(usec): 00:29:31.065 | 1.00th=[ 7767], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[10683], 00:29:31.065 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11994], 60.00th=[14091], 00:29:31.065 | 70.00th=[17695], 80.00th=[21627], 90.00th=[39060], 95.00th=[41157], 00:29:31.065 | 99.00th=[48497], 99.50th=[49021], 99.90th=[53216], 99.95th=[53740], 00:29:31.065 | 99.99th=[53740] 00:29:31.065 bw ( KiB/s): min=12800, max=19968, per=28.50%, avg=16384.00, stdev=5068.54, samples=2 00:29:31.065 iops : min= 3200, max= 4992, avg=4096.00, stdev=1267.14, samples=2 00:29:31.065 lat (msec) : 4=0.01%, 10=11.71%, 20=70.66%, 50=17.44%, 100=0.19% 00:29:31.065 cpu : usr=4.19%, sys=6.58%, ctx=338, majf=0, minf=1 00:29:31.065 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:29:31.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:31.065 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:31.065 issued rwts: total=3899,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:31.065 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:31.065 job2: (groupid=0, jobs=1): err= 0: pid=1817908: Tue Nov 26 20:59:34 2024 00:29:31.065 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:29:31.065 slat (usec): min=2, max=10121, avg=123.61, stdev=800.50 00:29:31.065 clat (usec): min=5694, max=57023, avg=16488.85, stdev=8396.36 00:29:31.065 lat (usec): min=5701, max=57031, avg=16612.46, stdev=8454.99 00:29:31.065 clat percentiles (usec): 00:29:31.065 | 1.00th=[ 7046], 5.00th=[ 8455], 10.00th=[ 9634], 20.00th=[10421], 00:29:31.065 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11207], 60.00th=[14877], 00:29:31.065 | 70.00th=[19268], 80.00th=[24773], 90.00th=[29492], 95.00th=[31851], 00:29:31.065 | 99.00th=[35914], 99.50th=[38536], 99.90th=[56886], 99.95th=[56886], 00:29:31.065 | 99.99th=[56886] 00:29:31.065 write: IOPS=3797, BW=14.8MiB/s (15.6MB/s)(14.9MiB/1004msec); 0 zone resets 00:29:31.065 slat (usec): min=3, max=15640, avg=132.42, stdev=868.24 00:29:31.065 clat (usec): min=572, max=70726, avg=17651.32, stdev=8838.28 00:29:31.065 lat (usec): min=4830, max=70733, avg=17783.73, stdev=8872.82 00:29:31.065 clat percentiles (usec): 00:29:31.065 | 1.00th=[ 5342], 5.00th=[ 8029], 10.00th=[ 9110], 20.00th=[10421], 00:29:31.065 | 30.00th=[10814], 40.00th=[11207], 50.00th=[15795], 60.00th=[20579], 00:29:31.065 | 70.00th=[24249], 80.00th=[25297], 90.00th=[27395], 95.00th=[29754], 00:29:31.065 | 99.00th=[42730], 99.50th=[58459], 99.90th=[70779], 99.95th=[70779], 00:29:31.065 | 99.99th=[70779] 00:29:31.065 bw ( KiB/s): min=11688, max=17792, per=25.64%, avg=14740.00, stdev=4316.18, samples=2 00:29:31.065 iops : min= 2922, max= 4448, avg=3685.00, stdev=1079.04, samples=2 00:29:31.065 lat (usec) : 750=0.01% 00:29:31.065 lat (msec) : 10=11.71%, 20=51.98%, 50=35.64%, 100=0.66% 00:29:31.065 cpu : usr=3.49%, sys=4.89%, ctx=226, majf=0, minf=1 00:29:31.065 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:29:31.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:31.065 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:31.065 issued rwts: total=3584,3813,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:31.065 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:31.065 job3: (groupid=0, jobs=1): err= 0: pid=1817909: Tue Nov 26 20:59:34 2024 00:29:31.065 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:29:31.065 slat (usec): min=2, max=14957, avg=181.21, 
stdev=1094.82 00:29:31.065 clat (usec): min=13104, max=56181, avg=23819.58, stdev=6740.18 00:29:31.065 lat (usec): min=13108, max=56995, avg=24000.79, stdev=6824.93 00:29:31.065 clat percentiles (usec): 00:29:31.065 | 1.00th=[13566], 5.00th=[14484], 10.00th=[15664], 20.00th=[18220], 00:29:31.065 | 30.00th=[20055], 40.00th=[21365], 50.00th=[22676], 60.00th=[24511], 00:29:31.065 | 70.00th=[26084], 80.00th=[28705], 90.00th=[31589], 95.00th=[35390], 00:29:31.065 | 99.00th=[49021], 99.50th=[50070], 99.90th=[56361], 99.95th=[56361], 00:29:31.065 | 99.99th=[56361] 00:29:31.065 write: IOPS=2654, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1003msec); 0 zone resets 00:29:31.065 slat (usec): min=2, max=8964, avg=196.03, stdev=1007.53 00:29:31.065 clat (usec): min=1957, max=62805, avg=24500.41, stdev=11247.38 00:29:31.065 lat (usec): min=10625, max=64554, avg=24696.44, stdev=11342.61 00:29:31.065 clat percentiles (usec): 00:29:31.065 | 1.00th=[11076], 5.00th=[14091], 10.00th=[14746], 20.00th=[15401], 00:29:31.065 | 30.00th=[18744], 40.00th=[20579], 50.00th=[21103], 60.00th=[21627], 00:29:31.065 | 70.00th=[24249], 80.00th=[29492], 90.00th=[40633], 95.00th=[50070], 00:29:31.065 | 99.00th=[61080], 99.50th=[62653], 99.90th=[62653], 99.95th=[62653], 00:29:31.065 | 99.99th=[62653] 00:29:31.065 bw ( KiB/s): min= 8248, max=12288, per=17.86%, avg=10268.00, stdev=2856.71, samples=2 00:29:31.065 iops : min= 2062, max= 3072, avg=2567.00, stdev=714.18, samples=2 00:29:31.065 lat (msec) : 2=0.02%, 20=32.84%, 50=64.23%, 100=2.91% 00:29:31.065 cpu : usr=2.00%, sys=2.20%, ctx=218, majf=0, minf=2 00:29:31.065 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:29:31.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:31.065 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:31.065 issued rwts: total=2560,2662,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:31.065 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:31.065 00:29:31.065 Run status group 0 (all jobs): 00:29:31.065 READ: bw=53.0MiB/s (55.6MB/s), 9.97MiB/s-15.2MiB/s (10.5MB/s-15.9MB/s), io=53.2MiB (55.8MB), run=1003-1004msec 00:29:31.065 WRITE: bw=56.1MiB/s (58.9MB/s), 10.4MiB/s-15.9MiB/s (10.9MB/s-16.7MB/s), io=56.4MiB (59.1MB), run=1003-1004msec 00:29:31.065 00:29:31.065 Disk stats (read/write): 00:29:31.065 nvme0n1: ios=3399/3584, merge=0/0, ticks=23639/26525, in_queue=50164, util=99.60% 00:29:31.065 nvme0n2: ios=3635/3807, merge=0/0, ticks=25517/25713, in_queue=51230, util=90.96% 00:29:31.065 nvme0n3: ios=2616/3067, merge=0/0, ticks=17492/19916, in_queue=37408, util=93.12% 00:29:31.065 nvme0n4: ios=2105/2439, merge=0/0, ticks=16513/19183, in_queue=35696, util=94.95% 00:29:31.065 20:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:29:31.065 20:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1818043 00:29:31.066 20:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:29:31.066 20:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:29:31.066 [global] 00:29:31.066 thread=1 00:29:31.066 invalidate=1 00:29:31.066 rw=read 00:29:31.066 time_based=1 00:29:31.066 runtime=10 00:29:31.066 ioengine=libaio 00:29:31.066 direct=1 00:29:31.066 bs=4096 00:29:31.066 iodepth=1 00:29:31.066 norandommap=1 00:29:31.066 
numjobs=1 00:29:31.066 00:29:31.066 [job0] 00:29:31.066 filename=/dev/nvme0n1 00:29:31.066 [job1] 00:29:31.066 filename=/dev/nvme0n2 00:29:31.066 [job2] 00:29:31.066 filename=/dev/nvme0n3 00:29:31.066 [job3] 00:29:31.066 filename=/dev/nvme0n4 00:29:31.066 Could not set queue depth (nvme0n1) 00:29:31.066 Could not set queue depth (nvme0n2) 00:29:31.066 Could not set queue depth (nvme0n3) 00:29:31.066 Could not set queue depth (nvme0n4) 00:29:31.066 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:31.066 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:31.066 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:31.066 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:31.066 fio-3.35 00:29:31.066 Starting 4 threads 00:29:34.344 20:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:29:34.344 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=40849408, buflen=4096 00:29:34.344 fio: pid=1818136, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:29:34.344 20:59:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:29:34.602 20:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:34.602 20:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:29:34.602 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=42221568, buflen=4096 00:29:34.602 fio: pid=1818135, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:29:34.860 20:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:34.860 20:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:29:34.860 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=2097152, buflen=4096 00:29:34.860 fio: pid=1818133, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:29:35.118 20:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:35.118 20:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:29:35.118 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=47120384, buflen=4096 00:29:35.118 fio: pid=1818134, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:29:35.118 00:29:35.119 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1818133: Tue Nov 26 20:59:38 2024 00:29:35.119 read: IOPS=145, BW=581KiB/s (595kB/s)(2048KiB/3525msec) 
00:29:35.119 slat (usec): min=5, max=12730, avg=57.91, stdev=739.62 00:29:35.119 clat (usec): min=199, max=42152, avg=6779.10, stdev=15081.57 00:29:35.119 lat (usec): min=205, max=53011, avg=6837.03, stdev=15141.31 00:29:35.119 clat percentiles (usec): 00:29:35.119 | 1.00th=[ 202], 5.00th=[ 204], 10.00th=[ 206], 20.00th=[ 210], 00:29:35.119 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 231], 60.00th=[ 243], 00:29:35.119 | 70.00th=[ 277], 80.00th=[ 351], 90.00th=[41681], 95.00th=[42206], 00:29:35.119 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:29:35.119 | 99.99th=[42206] 00:29:35.119 bw ( KiB/s): min= 96, max= 312, per=0.54%, avg=184.00, stdev=89.94, samples=6 00:29:35.119 iops : min= 24, max= 78, avg=46.00, stdev=22.49, samples=6 00:29:35.119 lat (usec) : 250=61.79%, 500=21.64%, 750=0.19%, 1000=0.19% 00:29:35.119 lat (msec) : 20=0.39%, 50=15.59% 00:29:35.119 cpu : usr=0.06%, sys=0.14%, ctx=519, majf=0, minf=1 00:29:35.119 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:35.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.119 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.119 issued rwts: total=513,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:35.119 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:35.119 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1818134: Tue Nov 26 20:59:38 2024 00:29:35.119 read: IOPS=3029, BW=11.8MiB/s (12.4MB/s)(44.9MiB/3798msec) 00:29:35.119 slat (usec): min=5, max=10731, avg=13.04, stdev=183.66 00:29:35.119 clat (usec): min=179, max=41101, avg=313.62, stdev=1517.87 00:29:35.119 lat (usec): min=185, max=47005, avg=326.66, stdev=1542.72 00:29:35.119 clat percentiles (usec): 00:29:35.119 | 1.00th=[ 208], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 225], 00:29:35.119 | 30.00th=[ 231], 40.00th=[ 239], 50.00th=[ 251], 60.00th=[ 265], 00:29:35.119 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 314], 00:29:35.119 | 99.00th=[ 429], 99.50th=[ 545], 99.90th=[41157], 99.95th=[41157], 00:29:35.119 | 99.99th=[41157] 00:29:35.119 bw ( KiB/s): min= 2918, max=16200, per=38.38%, avg=13055.71, stdev=4630.21, samples=7 00:29:35.119 iops : min= 729, max= 4050, avg=3263.86, stdev=1157.73, samples=7 00:29:35.119 lat (usec) : 250=49.77%, 500=49.59%, 750=0.35%, 1000=0.14% 00:29:35.119 lat (msec) : 2=0.01%, 50=0.14% 00:29:35.119 cpu : usr=1.66%, sys=4.58%, ctx=11513, majf=0, minf=2 00:29:35.119 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:35.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.119 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.119 issued rwts: total=11505,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:35.119 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:35.119 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1818135: Tue Nov 26 20:59:38 2024 00:29:35.119 read: IOPS=3194, BW=12.5MiB/s (13.1MB/s)(40.3MiB/3227msec) 00:29:35.119 slat (usec): min=5, max=16072, avg=12.01, stdev=214.54 00:29:35.119 clat (usec): min=218, max=41160, avg=295.65, stdev=821.17 00:29:35.119 lat (usec): min=223, max=41168, avg=307.67, stdev=848.85 00:29:35.119 clat percentiles (usec): 00:29:35.119 | 1.00th=[ 231], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 249], 00:29:35.119 | 30.00th=[ 255], 40.00th=[ 265], 50.00th=[ 277], 
60.00th=[ 281], 00:29:35.119 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 310], 95.00th=[ 322], 00:29:35.119 | 99.00th=[ 400], 99.50th=[ 433], 99.90th=[ 1123], 99.95th=[ 9503], 00:29:35.119 | 99.99th=[41157] 00:29:35.119 bw ( KiB/s): min= 8696, max=15344, per=37.91%, avg=12894.67, stdev=2227.18, samples=6 00:29:35.119 iops : min= 2174, max= 3836, avg=3223.67, stdev=556.80, samples=6 00:29:35.119 lat (usec) : 250=22.39%, 500=77.28%, 750=0.12%, 1000=0.09% 00:29:35.119 lat (msec) : 2=0.02%, 4=0.01%, 10=0.04%, 20=0.01%, 50=0.04% 00:29:35.119 cpu : usr=1.95%, sys=4.53%, ctx=10311, majf=0, minf=2 00:29:35.119 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:35.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.119 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.119 issued rwts: total=10309,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:35.119 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:35.119 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1818136: Tue Nov 26 20:59:38 2024 00:29:35.119 read: IOPS=3425, BW=13.4MiB/s (14.0MB/s)(39.0MiB/2912msec) 00:29:35.119 slat (nsec): min=4282, max=59407, avg=9843.20, stdev=5453.63 00:29:35.119 clat (usec): min=201, max=41174, avg=278.49, stdev=578.69 00:29:35.119 lat (usec): min=208, max=41179, avg=288.34, stdev=578.85 00:29:35.119 clat percentiles (usec): 00:29:35.119 | 1.00th=[ 219], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 243], 00:29:35.119 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 273], 00:29:35.119 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 306], 95.00th=[ 326], 00:29:35.119 | 99.00th=[ 441], 99.50th=[ 474], 99.90th=[ 889], 99.95th=[ 955], 00:29:35.119 | 99.99th=[41157] 00:29:35.119 bw ( KiB/s): min=11944, max=15608, per=41.08%, avg=13972.80, stdev=1333.78, samples=5 00:29:35.119 iops : min= 2986, max= 3902, avg=3493.20, stdev=333.44, samples=5 00:29:35.119 lat (usec) : 250=30.53%, 500=69.23%, 750=0.09%, 1000=0.11% 00:29:35.119 lat (msec) : 2=0.01%, 50=0.02% 00:29:35.119 cpu : usr=1.96%, sys=5.12%, ctx=9975, majf=0, minf=1 00:29:35.119 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:35.119 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.119 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.119 issued rwts: total=9974,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:35.119 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:35.119 00:29:35.119 Run status group 0 (all jobs): 00:29:35.119 READ: bw=33.2MiB/s (34.8MB/s), 581KiB/s-13.4MiB/s (595kB/s-14.0MB/s), io=126MiB (132MB), run=2912-3798msec 00:29:35.119 00:29:35.119 Disk stats (read/write): 00:29:35.119 nvme0n1: ios=508/0, merge=0/0, ticks=3305/0, in_queue=3305, util=95.42% 00:29:35.119 nvme0n2: ios=11544/0, merge=0/0, ticks=3475/0, in_queue=3475, util=99.09% 00:29:35.119 nvme0n3: ios=9942/0, merge=0/0, ticks=2837/0, in_queue=2837, util=95.86% 00:29:35.119 nvme0n4: ios=9907/0, merge=0/0, ticks=2743/0, in_queue=2743, util=99.05% 00:29:35.378 20:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:35.378 20:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:29:35.635 20:59:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:35.635 20:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:29:35.893 20:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:35.893 20:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:29:36.151 20:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:36.151 20:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:29:36.718 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:29:36.718 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1818043 00:29:36.718 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:29:36.718 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:36.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:36.718 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:36.718 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:29:36.718 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:29:36.718 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:36.718 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:29:36.718 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:36.718 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:29:36.718 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:29:36.718 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:29:36.718 nvmf hotplug test: fio failed as expected 00:29:36.718 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:36.976 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:29:36.976 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:29:36.976 20:59:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:29:36.976 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:29:36.976 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:29:36.976 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:36.976 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:29:36.976 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:36.976 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:29:36.976 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:36.976 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:36.976 rmmod nvme_tcp 00:29:36.976 rmmod nvme_fabrics 00:29:36.976 rmmod nvme_keyring 00:29:36.976 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:36.976 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:29:36.976 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:29:36.976 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1816027 ']' 00:29:36.976 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1816027 00:29:36.976 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1816027 ']' 00:29:36.976 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1816027 00:29:36.976 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:29:36.976 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:36.976 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1816027 00:29:36.976 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:36.976 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:36.976 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1816027' 00:29:36.976 killing process with pid 1816027 00:29:36.976 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1816027 00:29:36.976 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1816027 00:29:37.234 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:37.234 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:37.234 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:37.234 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:29:37.234 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:29:37.234 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:29:37.234 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:37.234 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:37.234 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:37.234 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:37.234 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:37.234 20:59:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.771 20:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:39.771 00:29:39.771 real 0m24.042s 00:29:39.771 user 1m8.136s 00:29:39.771 sys 0m10.777s 00:29:39.771 20:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:39.771 20:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:39.771 ************************************ 00:29:39.771 END TEST nvmf_fio_target 00:29:39.771 ************************************ 00:29:39.771 20:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:29:39.771 20:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:39.771 20:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:39.771 20:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:39.771 ************************************ 00:29:39.771 START TEST nvmf_bdevio 00:29:39.771 ************************************ 00:29:39.771 20:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:29:39.771 * Looking for test storage... 
00:29:39.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:39.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.771 --rc genhtml_branch_coverage=1 00:29:39.771 --rc genhtml_function_coverage=1 00:29:39.771 --rc genhtml_legend=1 00:29:39.771 --rc geninfo_all_blocks=1 00:29:39.771 --rc geninfo_unexecuted_blocks=1 00:29:39.771 00:29:39.771 ' 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:39.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.771 --rc genhtml_branch_coverage=1 00:29:39.771 --rc genhtml_function_coverage=1 00:29:39.771 --rc genhtml_legend=1 00:29:39.771 --rc geninfo_all_blocks=1 00:29:39.771 --rc geninfo_unexecuted_blocks=1 00:29:39.771 00:29:39.771 ' 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:39.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.771 --rc genhtml_branch_coverage=1 00:29:39.771 --rc genhtml_function_coverage=1 00:29:39.771 --rc genhtml_legend=1 00:29:39.771 --rc geninfo_all_blocks=1 00:29:39.771 --rc geninfo_unexecuted_blocks=1 00:29:39.771 00:29:39.771 ' 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:39.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.771 --rc genhtml_branch_coverage=1 00:29:39.771 --rc genhtml_function_coverage=1 00:29:39.771 --rc genhtml_legend=1 00:29:39.771 --rc geninfo_all_blocks=1 00:29:39.771 --rc geninfo_unexecuted_blocks=1 00:29:39.771 00:29:39.771 ' 00:29:39.771 20:59:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:39.771 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:39.772 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.772 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.772 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.772 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:29:39.772 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.772 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:29:39.772 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:39.772 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:39.772 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:39.772 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:39.772 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:39.772 20:59:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:39.772 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:39.772 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:39.772 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:39.772 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:39.772 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:39.772 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:39.772 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:29:39.772 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:39.772 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:39.772 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:39.772 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:39.772 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:39.772 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.772 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:39.772 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.772 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:39.772 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:39.772 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:29:39.772 20:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:41.678 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:41.678 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:29:41.678 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:41.678 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:41.678 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:41.678 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:41.678 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:41.678 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:29:41.678 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:29:41.678 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:29:41.678 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:29:41.678 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:29:41.678 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:29:41.678 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:29:41.678 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:29:41.678 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:41.678 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:41.678 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:41.678 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:41.678 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:41.678 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:41.678 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:41.678 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:41.678 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:41.678 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:41.678 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:41.678 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:41.678 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:41.678 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:41.678 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:41.678 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:41.678 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:41.678 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:41.678 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:41.679 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:41.679 20:59:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:41.679 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:41.679 Found net devices under 0000:09:00.0: cvl_0_0 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:41.679 Found net devices under 0000:09:00.1: cvl_0_1 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:41.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:41.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:29:41.679 00:29:41.679 --- 10.0.0.2 ping statistics --- 00:29:41.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.679 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:41.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:41.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:29:41.679 00:29:41.679 --- 10.0.0.1 ping statistics --- 00:29:41.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.679 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:41.679 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:41.938 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:41.938 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:29:41.938 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:41.938 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:41.938 20:59:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:41.938 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1820783 00:29:41.938 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:29:41.938 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1820783 00:29:41.938 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1820783 ']' 00:29:41.938 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.938 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:41.938 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:41.938 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:41.938 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:41.938 [2024-11-26 20:59:45.446419] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:41.938 [2024-11-26 20:59:45.447488] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:29:41.938 [2024-11-26 20:59:45.447537] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:41.938 [2024-11-26 20:59:45.519894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:41.938 [2024-11-26 20:59:45.578242] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:41.938 [2024-11-26 20:59:45.578292] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:41.938 [2024-11-26 20:59:45.578335] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:41.938 [2024-11-26 20:59:45.578348] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:41.938 [2024-11-26 20:59:45.578364] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:41.938 [2024-11-26 20:59:45.579974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:41.938 [2024-11-26 20:59:45.580038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:41.938 [2024-11-26 20:59:45.580104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:41.938 [2024-11-26 20:59:45.580107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:42.197 [2024-11-26 20:59:45.671891] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
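The trace above covers nvmftestinit and nvmfappstart: nvmf_tcp_init moves the target port (cvl_0_0) into a dedicated network namespace, leaves the initiator port (cvl_0_1) in the host namespace, opens the NVMe/TCP port in iptables, verifies connectivity with ping in both directions, and nvmf_tgt is then launched inside that namespace in interrupt mode on cores 3-6 (-m 0x78). A condensed, hedged reconstruction of those steps, using the interface names and addresses from this run (the iptables comment tag is dropped and the SPDK binary path is shortened):

    # Reconstruction of nvmf_tcp_init / nvmfappstart as traced above (sketch only).
    NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                           # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator IP, host namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port

    # connectivity checks, as in the log
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

    # start the target in the namespace: shm id 0, all trace groups, interrupt mode, core mask 0x78
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &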
00:29:42.197 [2024-11-26 20:59:45.672123] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:42.197 [2024-11-26 20:59:45.672418] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:42.197 [2024-11-26 20:59:45.673002] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:42.197 [2024-11-26 20:59:45.673217] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:42.197 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:42.197 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:29:42.197 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:42.197 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:42.197 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:42.197 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:42.197 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:42.197 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.197 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:42.197 [2024-11-26 20:59:45.720844] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:42.197 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.197 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:42.197 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.197 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:42.197 Malloc0 00:29:42.197 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.197 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:42.197 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.197 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:42.197 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.197 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:42.197 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.197 20:59:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:42.197 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.197 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:42.197 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.197 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:42.197 [2024-11-26 20:59:45.788991] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:42.197 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.197 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:29:42.197 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:29:42.197 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:29:42.197 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:29:42.197 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:42.197 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:42.197 { 00:29:42.197 "params": { 00:29:42.197 "name": "Nvme$subsystem", 00:29:42.197 "trtype": "$TEST_TRANSPORT", 00:29:42.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:42.197 "adrfam": "ipv4", 00:29:42.197 "trsvcid": "$NVMF_PORT", 00:29:42.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:42.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:42.197 "hdgst": ${hdgst:-false}, 00:29:42.197 "ddgst": ${ddgst:-false} 00:29:42.197 }, 00:29:42.197 "method": "bdev_nvme_attach_controller" 00:29:42.197 } 00:29:42.197 EOF 00:29:42.197 )") 00:29:42.197 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:29:42.197 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:29:42.197 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:29:42.197 20:59:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:42.197 "params": { 00:29:42.197 "name": "Nvme1", 00:29:42.197 "trtype": "tcp", 00:29:42.197 "traddr": "10.0.0.2", 00:29:42.197 "adrfam": "ipv4", 00:29:42.197 "trsvcid": "4420", 00:29:42.197 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:42.197 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:42.197 "hdgst": false, 00:29:42.197 "ddgst": false 00:29:42.197 }, 00:29:42.197 "method": "bdev_nvme_attach_controller" 00:29:42.197 }' 00:29:42.197 [2024-11-26 20:59:45.842387] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
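bdevio.sh then drives the running target over JSON-RPC: rpc_cmd creates the TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 as its namespace, and a TCP listener on 10.0.0.2:4420; gen_nvmf_target_json then emits the initiator-side configuration (bdev_nvme_attach_controller for Nvme1 against that listener), which is fed to the bdevio application on /dev/fd/62. A hedged equivalent of the target-side setup using scripts/rpc.py directly, assuming the target's default RPC socket at /var/tmp/spdk.sock:

    # Target-side configuration mirroring the rpc_cmd calls traced above (sketch only).
    RPC=./scripts/rpc.py      # talks to /var/tmp/spdk.sock by default

    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420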
00:29:42.197 [2024-11-26 20:59:45.842480] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1820921 ] 00:29:42.455 [2024-11-26 20:59:45.913787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:42.455 [2024-11-26 20:59:45.978195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:42.455 [2024-11-26 20:59:45.978243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:42.455 [2024-11-26 20:59:45.978247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:42.713 I/O targets: 00:29:42.713 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:29:42.713 00:29:42.713 00:29:42.713 CUnit - A unit testing framework for C - Version 2.1-3 00:29:42.713 http://cunit.sourceforge.net/ 00:29:42.713 00:29:42.713 00:29:42.713 Suite: bdevio tests on: Nvme1n1 00:29:42.713 Test: blockdev write read block ...passed 00:29:42.713 Test: blockdev write zeroes read block ...passed 00:29:42.713 Test: blockdev write zeroes read no split ...passed 00:29:42.713 Test: blockdev write zeroes read split ...passed 00:29:42.713 Test: blockdev write zeroes read split partial ...passed 00:29:42.713 Test: blockdev reset ...[2024-11-26 20:59:46.381182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:42.713 [2024-11-26 20:59:46.381291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1339cb0 (9): Bad file descriptor 00:29:42.971 [2024-11-26 20:59:46.426646] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:29:42.971 passed 00:29:42.971 Test: blockdev write read 8 blocks ...passed 00:29:42.971 Test: blockdev write read size > 128k ...passed 00:29:42.971 Test: blockdev write read invalid size ...passed 00:29:42.971 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:42.971 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:42.971 Test: blockdev write read max offset ...passed 00:29:42.971 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:42.971 Test: blockdev writev readv 8 blocks ...passed 00:29:42.971 Test: blockdev writev readv 30 x 1block ...passed 00:29:43.229 Test: blockdev writev readv block ...passed 00:29:43.229 Test: blockdev writev readv size > 128k ...passed 00:29:43.229 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:43.229 Test: blockdev comparev and writev ...[2024-11-26 20:59:46.680914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:43.229 [2024-11-26 20:59:46.680951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.230 [2024-11-26 20:59:46.680976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:43.230 [2024-11-26 20:59:46.680994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.230 [2024-11-26 20:59:46.681424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:43.230 [2024-11-26 20:59:46.681448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:43.230 [2024-11-26 20:59:46.681470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:43.230 [2024-11-26 20:59:46.681486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:43.230 [2024-11-26 20:59:46.681906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:43.230 [2024-11-26 20:59:46.681933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:43.230 [2024-11-26 20:59:46.681955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:43.230 [2024-11-26 20:59:46.681972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:43.230 [2024-11-26 20:59:46.682420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:43.230 [2024-11-26 20:59:46.682446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:43.230 [2024-11-26 20:59:46.682468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:43.230 [2024-11-26 20:59:46.682485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:43.230 passed 00:29:43.230 Test: blockdev nvme passthru rw ...passed 00:29:43.230 Test: blockdev nvme passthru vendor specific ...[2024-11-26 20:59:46.764590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:43.230 [2024-11-26 20:59:46.764618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:43.230 [2024-11-26 20:59:46.764763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:43.230 [2024-11-26 20:59:46.764786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:43.230 [2024-11-26 20:59:46.764927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:43.230 [2024-11-26 20:59:46.764961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:43.230 [2024-11-26 20:59:46.765111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:43.230 [2024-11-26 20:59:46.765135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:43.230 passed 00:29:43.230 Test: blockdev nvme admin passthru ...passed 00:29:43.230 Test: blockdev copy ...passed 00:29:43.230 00:29:43.230 Run Summary: Type Total Ran Passed Failed Inactive 00:29:43.230 suites 1 1 n/a 0 0 00:29:43.230 tests 23 23 23 0 0 00:29:43.230 asserts 152 152 152 0 n/a 00:29:43.230 00:29:43.230 Elapsed time = 1.260 seconds 00:29:43.488 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:43.488 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.488 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:43.488 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.488 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:29:43.488 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:29:43.488 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:43.488 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:29:43.488 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:43.488 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:29:43.488 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:43.488 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:43.488 rmmod nvme_tcp 00:29:43.488 rmmod nvme_fabrics 00:29:43.488 rmmod nvme_keyring 00:29:43.488 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
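With all 23 bdevio tests passing (Run Summary above), the script deletes the subsystem over RPC and nvmftestfini starts unwinding the setup: the initiator kernel modules are unloaded here, and the lines that follow kill the target process, restore the iptables rules, and flush the test interfaces. A condensed, hedged sketch of that cleanup, using the PID and names from this run (the namespace removal is inferred from the remove_spdk_ns helper and is not shown verbatim in the trace):

    # Cleanup mirroring nvmftestfini as traced here (sketch only).
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    sync
    modprobe -v -r nvme-tcp              # nvme_fabrics / nvme_keyring are dropped along the way, per the rmmod lines
    modprobe -v -r nvme-fabrics

    kill 1820783                                            # nvmfpid recorded at startup
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop the SPDK_NVMF accept rule (iptr)
    ip netns delete cvl_0_0_ns_spdk                         # assumed effect of remove_spdk_ns
    ip -4 addr flush cvl_0_1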
00:29:43.488 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:29:43.488 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:29:43.488 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1820783 ']' 00:29:43.488 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1820783 00:29:43.488 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1820783 ']' 00:29:43.488 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1820783 00:29:43.488 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:29:43.488 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:43.488 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1820783 00:29:43.488 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:29:43.488 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:29:43.488 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1820783' 00:29:43.488 killing process with pid 1820783 00:29:43.488 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1820783 00:29:43.488 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1820783 00:29:43.749 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:43.749 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:43.749 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:43.749 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:29:43.749 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:29:43.749 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:29:43.749 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:43.749 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:43.749 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:43.749 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:43.749 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:43.749 20:59:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:46.297 20:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:46.297 00:29:46.297 real 0m6.428s 00:29:46.297 user 
0m8.669s 00:29:46.297 sys 0m2.537s 00:29:46.297 20:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:46.297 20:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:46.297 ************************************ 00:29:46.297 END TEST nvmf_bdevio 00:29:46.297 ************************************ 00:29:46.297 20:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:29:46.297 00:29:46.297 real 3m55.114s 00:29:46.297 user 8m52.212s 00:29:46.297 sys 1m25.437s 00:29:46.297 20:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:46.297 20:59:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:46.297 ************************************ 00:29:46.297 END TEST nvmf_target_core_interrupt_mode 00:29:46.297 ************************************ 00:29:46.297 20:59:49 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:29:46.297 20:59:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:46.297 20:59:49 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:46.297 20:59:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:46.297 ************************************ 00:29:46.297 START TEST nvmf_interrupt 00:29:46.297 ************************************ 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:29:46.297 * Looking for test storage... 
00:29:46.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:46.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.297 --rc genhtml_branch_coverage=1 00:29:46.297 --rc genhtml_function_coverage=1 00:29:46.297 --rc genhtml_legend=1 00:29:46.297 --rc geninfo_all_blocks=1 00:29:46.297 --rc geninfo_unexecuted_blocks=1 00:29:46.297 00:29:46.297 ' 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:46.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.297 --rc genhtml_branch_coverage=1 00:29:46.297 --rc genhtml_function_coverage=1 00:29:46.297 --rc genhtml_legend=1 00:29:46.297 --rc geninfo_all_blocks=1 00:29:46.297 --rc geninfo_unexecuted_blocks=1 00:29:46.297 00:29:46.297 ' 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:46.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.297 --rc genhtml_branch_coverage=1 00:29:46.297 --rc genhtml_function_coverage=1 00:29:46.297 --rc genhtml_legend=1 00:29:46.297 --rc geninfo_all_blocks=1 00:29:46.297 --rc geninfo_unexecuted_blocks=1 00:29:46.297 00:29:46.297 ' 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:46.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:46.297 --rc genhtml_branch_coverage=1 00:29:46.297 --rc genhtml_function_coverage=1 00:29:46.297 --rc genhtml_legend=1 00:29:46.297 --rc geninfo_all_blocks=1 00:29:46.297 --rc geninfo_unexecuted_blocks=1 00:29:46.297 00:29:46.297 ' 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:46.297 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:46.298 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:46.298 20:59:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:29:46.298 20:59:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:46.298 20:59:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:46.298 20:59:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:46.298 20:59:49 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.298 20:59:49 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.298 20:59:49 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.298 20:59:49 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:29:46.298 20:59:49 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:46.298 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:29:46.298 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:46.298 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:46.298 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:46.298 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:46.298 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:46.298 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:46.298 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:46.298 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:46.298 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:46.298 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:46.298 20:59:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:29:46.298 20:59:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:29:46.298 20:59:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:29:46.298 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:46.298 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:46.298 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:46.298 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:46.298 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:46.298 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:46.298 20:59:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:46.298 20:59:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:46.298 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:46.298 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:46.298 20:59:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:29:46.298 20:59:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:48.203 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.203 20:59:51 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:48.203 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:48.203 Found net devices under 0000:09:00.0: cvl_0_0 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:48.203 Found net devices under 0000:09:00.1: cvl_0_1 00:29:48.203 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:48.204 20:59:51 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:48.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:48.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:29:48.204 00:29:48.204 --- 10.0.0.2 ping statistics --- 00:29:48.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.204 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:48.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:48.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:29:48.204 00:29:48.204 --- 10.0.0.1 ping statistics --- 00:29:48.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.204 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1823016 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1823016 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 1823016 ']' 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:48.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:48.204 20:59:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:48.204 [2024-11-26 20:59:51.825785] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:48.204 [2024-11-26 20:59:51.826856] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:29:48.204 [2024-11-26 20:59:51.826916] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:48.463 [2024-11-26 20:59:51.898631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:48.463 [2024-11-26 20:59:51.955278] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
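(Editorial note, not part of the captured log: the nvmf_tcp_init steps traced above — flushing addresses, moving the target-side port into a network namespace, assigning 10.0.0.1/10.0.0.2, opening TCP port 4420 in iptables, and ping-verifying both directions — can be reproduced in isolation with the minimal sketch below. This is an approximation of what the trace shows, not the authoritative nvmf/common.sh code; interface and namespace names are simply the ones this run reports (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk) and will differ on other hardware.)
# Sketch only; mirrors the sequence visible in the trace above.
TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk    # names taken from this run
ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                     # target port lives in the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"                 # initiator address stays on the host
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # host -> namespaced target
ip netns exec "$NS" ping -c 1 10.0.0.1                # namespace -> host initiator
(The trace then launches nvmf_tgt inside that namespace with --interrupt-mode -m 0x3, which is the process all of the later reactor checks in this log probe.)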
00:29:48.463 [2024-11-26 20:59:51.955335] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:48.463 [2024-11-26 20:59:51.955365] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:48.463 [2024-11-26 20:59:51.955377] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:48.463 [2024-11-26 20:59:51.955387] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:48.463 [2024-11-26 20:59:51.956929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:48.463 [2024-11-26 20:59:51.956935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:48.463 [2024-11-26 20:59:52.047792] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:48.463 [2024-11-26 20:59:52.047809] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:48.463 [2024-11-26 20:59:52.048063] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:48.463 20:59:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:48.463 20:59:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:29:48.463 20:59:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:48.463 20:59:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:48.463 20:59:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:48.463 20:59:52 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:48.463 20:59:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:29:48.463 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:29:48.463 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:29:48.463 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:29:48.463 5000+0 records in 00:29:48.463 5000+0 records out 00:29:48.463 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0142067 s, 721 MB/s 00:29:48.463 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:29:48.463 20:59:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.463 20:59:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:48.463 AIO0 00:29:48.463 20:59:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.463 20:59:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:29:48.463 20:59:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.463 20:59:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:48.723 [2024-11-26 20:59:52.161639] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.723 20:59:52 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:48.723 [2024-11-26 20:59:52.185815] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1823016 0 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1823016 0 idle 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1823016 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1823016 -w 256 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1823016 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.26 reactor_0' 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1823016 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.26 reactor_0 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1823016 1 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1823016 1 idle 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1823016 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1823016 -w 256 00:29:48.723 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:29:48.982 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1823020 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1' 00:29:48.982 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1823020 root 20 0 128.2g 47616 34944 S 0.0 0.1 0:00.00 reactor_1 00:29:48.982 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:48.982 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:48.982 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:29:48.982 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:29:48.982 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:29:48.982 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:29:48.982 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:29:48.982 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:48.982 20:59:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:29:48.982 20:59:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1823073 00:29:48.982 20:59:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:48.982 20:59:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 
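(Editorial note, not part of the captured log: the reactor_is_idle / reactor_is_busy checks above amount to taking one batch sample of top(1) for the nvmf_tgt pid with threads shown, picking out the reactor_N row, and reading its %CPU column — field 9 in the default layout, exactly what the awk '{print $9}' in the trace extracts. A rough standalone approximation follows; the pid is the one from this run and is used purely as an example.)
# Approximation of the interrupt/common.sh check, not the script itself.
reactor_cpu_rate() {
    local pid=$1 idx=$2
    top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" | awk '{print $9}'
}
rate=$(reactor_cpu_rate 1823016 0)   # pid taken from this log
rate=${rate:-0}
if (( ${rate%.*} > 30 )); then       # 30% mirrors the idle_threshold used above
    echo "reactor_0 busy: ${rate}% CPU"
else
    echo "reactor_0 idle: ${rate}% CPU"
fi
(As the log shows, interrupt-mode reactors sit near 0% while idle and only climb toward 100% while spdk_nvme_perf is driving I/O against the subsystem.)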
00:29:48.982 20:59:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:29:48.982 20:59:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1823016 0 00:29:48.982 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1823016 0 busy 00:29:48.982 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1823016 00:29:48.982 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:29:48.982 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:29:48.982 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:29:48.982 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:48.982 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:29:48.982 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:29:48.982 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:29:48.982 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:48.982 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1823016 -w 256 00:29:48.982 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:29:49.241 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1823016 root 20 0 128.2g 48384 34944 R 87.5 0.1 0:00.40 reactor_0' 00:29:49.241 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1823016 root 20 0 128.2g 48384 34944 R 87.5 0.1 0:00.40 reactor_0 00:29:49.241 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:49.241 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:49.241 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=87.5 00:29:49.241 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=87 00:29:49.241 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:29:49.241 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:29:49.241 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:29:49.241 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:49.241 20:59:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:29:49.241 20:59:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:29:49.241 20:59:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1823016 1 00:29:49.241 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1823016 1 busy 00:29:49.241 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1823016 00:29:49.241 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:29:49.241 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:29:49.241 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:29:49.241 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:49.241 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:29:49.241 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:29:49.241 20:59:52 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:29:49.241 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:49.241 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1823016 -w 256 00:29:49.241 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:29:49.241 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1823020 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:00.23 reactor_1' 00:29:49.241 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1823020 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:00.23 reactor_1 00:29:49.241 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:49.241 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:49.241 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:29:49.241 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:29:49.241 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:29:49.241 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:29:49.241 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:29:49.241 20:59:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:49.241 20:59:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1823073 00:29:59.271 Initializing NVMe Controllers 00:29:59.271 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:59.271 Controller IO queue size 256, less than required. 00:29:59.271 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:59.271 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:59.271 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:59.271 Initialization complete. Launching workers. 
00:29:59.272 ======================================================== 00:29:59.272 Latency(us) 00:29:59.272 Device Information : IOPS MiB/s Average min max 00:29:59.272 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 13843.40 54.08 18503.68 4103.45 22528.96 00:29:59.272 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13472.80 52.63 19014.73 4276.71 23241.28 00:29:59.272 ======================================================== 00:29:59.272 Total : 27316.20 106.70 18755.74 4103.45 23241.28 00:29:59.272 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1823016 0 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1823016 0 idle 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1823016 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1823016 -w 256 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1823016 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:20.22 reactor_0' 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1823016 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:20.22 reactor_0 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1823016 1 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1823016 1 idle 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1823016 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1823016 -w 256 00:29:59.272 21:00:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:29:59.531 21:00:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1823020 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:09.98 reactor_1' 00:29:59.531 21:00:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1823020 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:09.98 reactor_1 00:29:59.531 21:00:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:59.531 21:00:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:59.531 21:00:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:29:59.531 21:00:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:29:59.531 21:00:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:29:59.531 21:00:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:29:59.531 21:00:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:29:59.531 21:00:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:59.531 21:00:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:29:59.790 21:00:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:29:59.790 21:00:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:29:59.790 21:00:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:29:59.790 21:00:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:29:59.790 21:00:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:30:01.694 21:00:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:30:01.694 21:00:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:30:01.694 21:00:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:30:01.694 21:00:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:30:01.694 21:00:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:30:01.694 21:00:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:30:01.694 21:00:05 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:30:01.694 21:00:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1823016 0 00:30:01.694 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1823016 0 idle 00:30:01.694 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1823016 00:30:01.694 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:01.694 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:01.694 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:01.694 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:01.694 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:01.694 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:01.694 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:30:01.694 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:01.694 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:01.694 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1823016 -w 256 00:30:01.694 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:30:01.955 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1823016 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:20.31 reactor_0' 00:30:01.955 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1823016 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:20.31 reactor_0 00:30:01.955 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:01.955 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:01.955 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:01.955 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:01.955 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:01.955 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:01.955 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:01.955 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:01.955 21:00:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:30:01.955 21:00:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1823016 1 00:30:01.955 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1823016 1 idle 00:30:01.955 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1823016 00:30:01.955 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:01.955 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:01.955 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:30:01.955 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:30:01.955 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:30:01.955 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:30:01.955 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
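(Editorial note, not part of the captured log: the connect step a little earlier in this trace follows the usual nvme-cli pattern — connect to the SPDK subsystem over TCP, then poll lsblk until a block device with the target's serial shows up. A condensed sketch using the values from this run; the --hostnqn/--hostid options seen in the trace are omitted here for brevity.)
# Sketch of the connect-and-wait sequence; addresses, NQN and serial copied from this run.
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
for _ in $(seq 1 15); do
    if [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; then
        echo "namespace visible"
        break
    fi
    sleep 2
done
nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # teardown, as done near the end of the test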
00:30:01.955 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:30:01.955 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:30:01.955 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1823016 -w 256 00:30:01.955 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:30:01.955 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1823020 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:10.01 reactor_1' 00:30:01.955 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1823020 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:10.01 reactor_1 00:30:01.955 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:30:01.955 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:30:01.955 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:30:01.955 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:30:01.955 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:30:01.955 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:30:01.955 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:30:01.955 21:00:05 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:30:01.955 21:00:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:02.215 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:30:02.215 21:00:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:02.215 21:00:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:30:02.215 21:00:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:30:02.215 21:00:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:02.215 21:00:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:30:02.215 21:00:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:02.215 21:00:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:30:02.215 21:00:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:30:02.215 21:00:05 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:30:02.215 21:00:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:02.215 21:00:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:30:02.215 21:00:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:02.215 21:00:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:30:02.215 21:00:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:02.215 21:00:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:02.215 rmmod nvme_tcp 00:30:02.215 rmmod nvme_fabrics 00:30:02.215 rmmod nvme_keyring 00:30:02.215 21:00:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:02.215 21:00:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:30:02.215 21:00:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:30:02.215 21:00:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
1823016 ']' 00:30:02.215 21:00:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1823016 00:30:02.215 21:00:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 1823016 ']' 00:30:02.215 21:00:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 1823016 00:30:02.215 21:00:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:30:02.215 21:00:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:02.215 21:00:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1823016 00:30:02.215 21:00:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:02.215 21:00:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:02.215 21:00:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1823016' 00:30:02.215 killing process with pid 1823016 00:30:02.215 21:00:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 1823016 00:30:02.215 21:00:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 1823016 00:30:02.474 21:00:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:02.474 21:00:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:02.474 21:00:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:02.474 21:00:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:30:02.474 21:00:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:30:02.474 21:00:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:02.474 21:00:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:30:02.474 21:00:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:02.474 21:00:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:02.474 21:00:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:02.474 21:00:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:02.474 21:00:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:05.010 21:00:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:05.010 00:30:05.010 real 0m18.706s 00:30:05.010 user 0m37.443s 00:30:05.010 sys 0m6.350s 00:30:05.010 21:00:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:05.010 21:00:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:05.010 ************************************ 00:30:05.010 END TEST nvmf_interrupt 00:30:05.010 ************************************ 00:30:05.010 00:30:05.010 real 24m59.404s 00:30:05.010 user 58m42.735s 00:30:05.010 sys 6m44.041s 00:30:05.010 21:00:08 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:05.010 21:00:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:05.010 ************************************ 00:30:05.010 END TEST nvmf_tcp 00:30:05.010 ************************************ 00:30:05.010 21:00:08 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:30:05.010 21:00:08 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:05.010 21:00:08 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:05.010 21:00:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:05.010 21:00:08 -- common/autotest_common.sh@10 -- # set +x 00:30:05.010 ************************************ 00:30:05.010 START TEST spdkcli_nvmf_tcp 00:30:05.010 ************************************ 00:30:05.010 21:00:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:05.010 * Looking for test storage... 00:30:05.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:05.010 21:00:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:05.010 21:00:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:30:05.010 21:00:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:05.010 21:00:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:05.010 21:00:08 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:05.010 21:00:08 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:05.010 21:00:08 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:05.010 21:00:08 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:30:05.010 21:00:08 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:30:05.010 21:00:08 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:30:05.010 21:00:08 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:30:05.010 21:00:08 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:30:05.010 21:00:08 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:30:05.010 21:00:08 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:05.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:05.011 --rc genhtml_branch_coverage=1 00:30:05.011 --rc genhtml_function_coverage=1 00:30:05.011 --rc genhtml_legend=1 00:30:05.011 --rc geninfo_all_blocks=1 00:30:05.011 --rc geninfo_unexecuted_blocks=1 00:30:05.011 00:30:05.011 ' 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:05.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:05.011 --rc genhtml_branch_coverage=1 00:30:05.011 --rc genhtml_function_coverage=1 00:30:05.011 --rc genhtml_legend=1 00:30:05.011 --rc geninfo_all_blocks=1 00:30:05.011 --rc geninfo_unexecuted_blocks=1 00:30:05.011 00:30:05.011 ' 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:05.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:05.011 --rc genhtml_branch_coverage=1 00:30:05.011 --rc genhtml_function_coverage=1 00:30:05.011 --rc genhtml_legend=1 00:30:05.011 --rc geninfo_all_blocks=1 00:30:05.011 --rc geninfo_unexecuted_blocks=1 00:30:05.011 00:30:05.011 ' 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:05.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:05.011 --rc genhtml_branch_coverage=1 00:30:05.011 --rc genhtml_function_coverage=1 00:30:05.011 --rc genhtml_legend=1 00:30:05.011 --rc geninfo_all_blocks=1 00:30:05.011 --rc geninfo_unexecuted_blocks=1 00:30:05.011 00:30:05.011 ' 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:30:05.011 
21:00:08 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:30:05.011 21:00:08 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:05.011 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:05.011 21:00:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:05.012 21:00:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:05.012 21:00:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1825542 00:30:05.012 21:00:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:05.012 21:00:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1825542 00:30:05.012 21:00:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 1825542 ']' 00:30:05.012 21:00:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.012 21:00:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:05.012 21:00:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:05.012 21:00:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:05.012 21:00:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:05.012 [2024-11-26 21:00:08.436420] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
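For context: at this point the spdkcli test launches an nvmf_tgt process (build/bin/nvmf_tgt -m 0x3 -p 0, pid 1825542 above) and blocks until its RPC socket answers before driving spdkcli commands against it. A minimal illustration of that launch-and-wait pattern follows; this is not the actual run_nvmf_tgt/waitforlisten helpers, and the retry budget and sleep interval are assumptions (the socket path /var/tmp/spdk.sock is the one shown in the log).

  # start the NVMe-oF target on cores 0-1 and remember its pid
  ./build/bin/nvmf_tgt -m 0x3 -p 0 &
  tgt_pid=$!
  # poll the RPC socket until the target responds (assumed retry budget)
  for _ in $(seq 1 100); do
      if ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; then
          break
      fi
      sleep 0.5
  done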
00:30:05.012 [2024-11-26 21:00:08.436509] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1825542 ] 00:30:05.012 [2024-11-26 21:00:08.504156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:05.012 [2024-11-26 21:00:08.567039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:05.012 [2024-11-26 21:00:08.567043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.012 21:00:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:05.012 21:00:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:30:05.012 21:00:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:05.012 21:00:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:05.012 21:00:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:05.270 21:00:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:05.270 21:00:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:05.270 21:00:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:05.270 21:00:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:05.270 21:00:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:05.270 21:00:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:05.270 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:05.270 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:05.270 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:05.270 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:05.270 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:05.270 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:05.270 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:05.270 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:05.270 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:05.270 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:05.270 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:05.270 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:05.270 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:05.271 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:05.271 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:05.271 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:30:05.271 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:05.271 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:05.271 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:05.271 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:05.271 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:05.271 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:05.271 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:05.271 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:05.271 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:05.271 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:05.271 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:05.271 ' 00:30:07.801 [2024-11-26 21:00:11.348660] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:09.175 [2024-11-26 21:00:12.621121] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:11.704 [2024-11-26 21:00:14.992192] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:13.603 [2024-11-26 21:00:17.010345] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:14.977 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:14.977 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:14.977 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:14.977 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:14.977 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:14.977 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:14.978 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:14.978 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:14.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:14.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:14.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:14.978 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:14.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:14.978 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:14.978 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:14.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:14.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:14.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:14.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:14.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:14.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:14.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:14.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:14.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:14.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:14.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:14.978 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:14.978 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:14.978 21:00:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:14.978 21:00:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:14.978 21:00:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:15.236 21:00:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:15.236 21:00:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:15.236 21:00:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:15.236 21:00:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:30:15.236 21:00:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:15.495 21:00:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:15.495 21:00:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:15.495 21:00:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:15.495 21:00:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:15.495 21:00:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:15.495 
21:00:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:15.495 21:00:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:15.495 21:00:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:15.495 21:00:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:15.495 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:15.495 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:15.495 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:15.495 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:15.495 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:15.495 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:15.495 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:15.495 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:15.495 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:15.495 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:15.495 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:15.495 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:15.495 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:15.495 ' 00:30:20.772 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:20.772 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:20.772 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:20.772 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:20.772 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:20.772 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:20.772 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:20.772 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:20.772 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:20.772 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:20.772 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:20.772 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:20.772 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:20.772 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:21.030 21:00:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:21.030 21:00:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:21.030 21:00:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:21.030 
21:00:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1825542 00:30:21.030 21:00:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1825542 ']' 00:30:21.030 21:00:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1825542 00:30:21.030 21:00:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:30:21.030 21:00:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:21.030 21:00:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1825542 00:30:21.030 21:00:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:21.030 21:00:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:21.030 21:00:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1825542' 00:30:21.030 killing process with pid 1825542 00:30:21.030 21:00:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1825542 00:30:21.030 21:00:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1825542 00:30:21.288 21:00:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:21.288 21:00:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:21.288 21:00:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1825542 ']' 00:30:21.288 21:00:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1825542 00:30:21.288 21:00:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1825542 ']' 00:30:21.288 21:00:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1825542 00:30:21.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1825542) - No such process 00:30:21.288 21:00:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 1825542 is not found' 00:30:21.288 Process with pid 1825542 is not found 00:30:21.288 21:00:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:21.288 21:00:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:21.288 21:00:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:21.288 00:30:21.288 real 0m16.607s 00:30:21.288 user 0m35.407s 00:30:21.288 sys 0m0.747s 00:30:21.288 21:00:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:21.288 21:00:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:21.288 ************************************ 00:30:21.288 END TEST spdkcli_nvmf_tcp 00:30:21.288 ************************************ 00:30:21.288 21:00:24 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:21.288 21:00:24 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:21.288 21:00:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:21.288 21:00:24 -- common/autotest_common.sh@10 -- # set +x 00:30:21.288 ************************************ 00:30:21.288 START TEST nvmf_identify_passthru 00:30:21.288 ************************************ 00:30:21.288 21:00:24 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:21.288 * Looking for test 
storage... 00:30:21.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:21.288 21:00:24 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:21.288 21:00:24 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:30:21.288 21:00:24 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:21.547 21:00:25 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:21.547 21:00:25 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:21.547 21:00:25 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:21.547 21:00:25 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:21.547 21:00:25 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:30:21.547 21:00:25 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:30:21.547 21:00:25 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:30:21.547 21:00:25 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:30:21.547 21:00:25 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:30:21.547 21:00:25 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:30:21.547 21:00:25 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:30:21.547 21:00:25 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:21.547 21:00:25 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:30:21.547 21:00:25 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:30:21.547 21:00:25 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:21.547 21:00:25 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:21.547 21:00:25 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:30:21.547 21:00:25 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:30:21.547 21:00:25 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:21.547 21:00:25 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:30:21.547 21:00:25 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:30:21.547 21:00:25 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:30:21.547 21:00:25 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:30:21.547 21:00:25 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:21.547 21:00:25 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:30:21.547 21:00:25 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:30:21.547 21:00:25 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:21.547 21:00:25 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:21.547 21:00:25 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:30:21.547 21:00:25 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:21.547 21:00:25 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:21.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.547 --rc genhtml_branch_coverage=1 00:30:21.547 --rc genhtml_function_coverage=1 00:30:21.547 --rc genhtml_legend=1 00:30:21.547 --rc geninfo_all_blocks=1 00:30:21.547 --rc geninfo_unexecuted_blocks=1 00:30:21.547 00:30:21.547 ' 00:30:21.547 21:00:25 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:21.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.547 --rc genhtml_branch_coverage=1 00:30:21.547 --rc genhtml_function_coverage=1 00:30:21.547 --rc genhtml_legend=1 00:30:21.547 --rc geninfo_all_blocks=1 00:30:21.547 --rc geninfo_unexecuted_blocks=1 00:30:21.547 00:30:21.547 ' 00:30:21.547 21:00:25 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:21.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.547 --rc genhtml_branch_coverage=1 00:30:21.547 --rc genhtml_function_coverage=1 00:30:21.547 --rc genhtml_legend=1 00:30:21.547 --rc geninfo_all_blocks=1 00:30:21.547 --rc geninfo_unexecuted_blocks=1 00:30:21.547 00:30:21.547 ' 00:30:21.547 21:00:25 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:21.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:21.547 --rc genhtml_branch_coverage=1 00:30:21.547 --rc genhtml_function_coverage=1 00:30:21.547 --rc genhtml_legend=1 00:30:21.547 --rc geninfo_all_blocks=1 00:30:21.547 --rc geninfo_unexecuted_blocks=1 00:30:21.547 00:30:21.547 ' 00:30:21.547 21:00:25 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:21.547 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:30:21.547 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:21.547 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:21.547 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:21.547 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:30:21.547 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:21.547 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:21.547 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:21.547 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:21.547 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:21.547 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:21.547 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:21.547 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:21.547 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:21.547 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:21.547 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:21.547 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:21.547 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:21.547 21:00:25 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:30:21.547 21:00:25 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:21.548 21:00:25 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:21.548 21:00:25 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:21.548 21:00:25 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.548 21:00:25 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.548 21:00:25 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.548 21:00:25 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:21.548 21:00:25 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.548 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:30:21.548 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:21.548 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:21.548 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:21.548 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:21.548 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:21.548 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:21.548 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:21.548 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:21.548 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:21.548 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:21.548 21:00:25 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:21.548 21:00:25 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:30:21.548 21:00:25 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:21.548 21:00:25 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:21.548 21:00:25 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:21.548 21:00:25 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.548 21:00:25 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.548 21:00:25 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.548 21:00:25 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:21.548 21:00:25 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.548 21:00:25 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:21.548 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:21.548 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:21.548 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:21.548 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:21.548 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:21.548 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:21.548 21:00:25 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:21.548 21:00:25 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:21.548 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:21.548 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:21.548 21:00:25 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:30:21.548 21:00:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:30:23.448 21:00:27 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:23.448 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:23.448 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:23.448 Found net devices under 0000:09:00.0: cvl_0_0 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:23.448 Found net devices under 0000:09:00.1: cvl_0_1 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:23.448 21:00:27 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:23.448 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:23.706 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:23.706 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:23.706 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:23.706 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:23.706 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:23.706 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:23.706 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:23.706 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:23.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:23.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:30:23.706 00:30:23.706 --- 10.0.0.2 ping statistics --- 00:30:23.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.706 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:30:23.706 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:23.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:23.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:30:23.706 00:30:23.706 --- 10.0.0.1 ping statistics --- 00:30:23.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.706 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:30:23.706 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:23.706 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:30:23.706 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:23.706 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:23.706 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:23.706 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:23.706 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:23.706 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:23.706 21:00:27 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:23.706 21:00:27 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:23.706 21:00:27 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:23.706 21:00:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:23.706 21:00:27 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:23.706 21:00:27 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:30:23.706 21:00:27 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:30:23.706 21:00:27 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:30:23.706 21:00:27 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:30:23.706 21:00:27 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:30:23.706 21:00:27 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:30:23.706 21:00:27 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:23.706 21:00:27 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:23.706 21:00:27 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:30:23.706 21:00:27 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:30:23.706 21:00:27 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0b:00.0 00:30:23.706 21:00:27 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:0b:00.0 00:30:23.706 21:00:27 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:0b:00.0 00:30:23.706 21:00:27 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:0b:00.0 ']' 00:30:23.706 21:00:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:30:23.706 21:00:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:23.706 21:00:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:27.886 21:00:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ72430F4Q1P0FGN 00:30:27.886 21:00:31 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:30:27.886 21:00:31 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:27.886 21:00:31 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:32.067 21:00:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:30:32.067 21:00:35 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:32.067 21:00:35 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:32.067 21:00:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:32.067 21:00:35 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:32.067 21:00:35 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:32.067 21:00:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:32.067 21:00:35 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1830314 00:30:32.067 21:00:35 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:32.067 21:00:35 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:32.067 21:00:35 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1830314 00:30:32.067 21:00:35 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 1830314 ']' 00:30:32.068 21:00:35 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:32.068 21:00:35 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:32.068 21:00:35 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:32.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:32.068 21:00:35 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:32.068 21:00:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:32.068 [2024-11-26 21:00:35.686023] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:30:32.068 [2024-11-26 21:00:35.686118] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:32.068 [2024-11-26 21:00:35.760157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:32.325 [2024-11-26 21:00:35.819590] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:32.325 [2024-11-26 21:00:35.819641] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:32.325 [2024-11-26 21:00:35.819654] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:32.325 [2024-11-26 21:00:35.819664] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:32.325 [2024-11-26 21:00:35.819673] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:32.325 [2024-11-26 21:00:35.821193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:32.325 [2024-11-26 21:00:35.821258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:32.325 [2024-11-26 21:00:35.821331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:32.325 [2024-11-26 21:00:35.821335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:32.325 21:00:35 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:32.325 21:00:35 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:30:32.325 21:00:35 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:32.325 21:00:35 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.325 21:00:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:32.325 INFO: Log level set to 20 00:30:32.325 INFO: Requests: 00:30:32.325 { 00:30:32.325 "jsonrpc": "2.0", 00:30:32.325 "method": "nvmf_set_config", 00:30:32.325 "id": 1, 00:30:32.325 "params": { 00:30:32.325 "admin_cmd_passthru": { 00:30:32.325 "identify_ctrlr": true 00:30:32.325 } 00:30:32.325 } 00:30:32.325 } 00:30:32.325 00:30:32.325 INFO: response: 00:30:32.325 { 00:30:32.325 "jsonrpc": "2.0", 00:30:32.325 "id": 1, 00:30:32.325 "result": true 00:30:32.325 } 00:30:32.325 00:30:32.325 21:00:35 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.325 21:00:35 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:32.325 21:00:35 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.325 21:00:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:32.325 INFO: Setting log level to 20 00:30:32.325 INFO: Setting log level to 20 00:30:32.325 INFO: Log level set to 20 00:30:32.325 INFO: Log level set to 20 00:30:32.325 INFO: Requests: 00:30:32.325 { 00:30:32.325 "jsonrpc": "2.0", 00:30:32.325 "method": "framework_start_init", 00:30:32.325 "id": 1 00:30:32.325 } 00:30:32.325 00:30:32.325 INFO: Requests: 00:30:32.325 { 00:30:32.325 "jsonrpc": "2.0", 00:30:32.325 "method": "framework_start_init", 00:30:32.326 "id": 1 00:30:32.326 } 00:30:32.326 00:30:32.583 [2024-11-26 21:00:36.027239] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:32.583 INFO: response: 00:30:32.583 { 00:30:32.583 "jsonrpc": "2.0", 00:30:32.583 "id": 1, 00:30:32.583 "result": true 00:30:32.583 } 00:30:32.583 00:30:32.583 INFO: response: 00:30:32.583 { 00:30:32.583 "jsonrpc": "2.0", 00:30:32.583 "id": 1, 00:30:32.583 "result": true 00:30:32.583 } 00:30:32.583 00:30:32.583 21:00:36 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.583 21:00:36 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:32.583 21:00:36 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.583 21:00:36 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:30:32.583 INFO: Setting log level to 40 00:30:32.583 INFO: Setting log level to 40 00:30:32.583 INFO: Setting log level to 40 00:30:32.583 [2024-11-26 21:00:36.037252] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:32.583 21:00:36 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:32.583 21:00:36 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:32.583 21:00:36 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:32.583 21:00:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:32.583 21:00:36 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 00:30:32.583 21:00:36 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:32.583 21:00:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:35.929 Nvme0n1 00:30:35.929 21:00:38 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.929 21:00:38 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:35.929 21:00:38 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.929 21:00:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:35.929 21:00:38 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.929 21:00:38 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:35.929 21:00:38 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.929 21:00:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:35.929 21:00:38 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.929 21:00:38 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:35.929 21:00:38 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.929 21:00:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:35.929 [2024-11-26 21:00:38.937674] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:35.929 21:00:38 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.929 21:00:38 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:35.929 21:00:38 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.929 21:00:38 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:35.929 [ 00:30:35.929 { 00:30:35.929 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:35.929 "subtype": "Discovery", 00:30:35.929 "listen_addresses": [], 00:30:35.929 "allow_any_host": true, 00:30:35.929 "hosts": [] 00:30:35.929 }, 00:30:35.929 { 00:30:35.929 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:35.929 "subtype": "NVMe", 00:30:35.929 "listen_addresses": [ 00:30:35.929 { 00:30:35.929 "trtype": "TCP", 00:30:35.929 "adrfam": "IPv4", 00:30:35.929 "traddr": "10.0.0.2", 00:30:35.929 "trsvcid": "4420" 00:30:35.929 } 00:30:35.929 ], 00:30:35.929 "allow_any_host": true, 00:30:35.929 "hosts": [], 00:30:35.929 "serial_number": 
"SPDK00000000000001", 00:30:35.929 "model_number": "SPDK bdev Controller", 00:30:35.929 "max_namespaces": 1, 00:30:35.929 "min_cntlid": 1, 00:30:35.929 "max_cntlid": 65519, 00:30:35.929 "namespaces": [ 00:30:35.929 { 00:30:35.929 "nsid": 1, 00:30:35.929 "bdev_name": "Nvme0n1", 00:30:35.929 "name": "Nvme0n1", 00:30:35.929 "nguid": "C6268EEC53094898AAFBB6A69A71B77F", 00:30:35.929 "uuid": "c6268eec-5309-4898-aafb-b6a69a71b77f" 00:30:35.929 } 00:30:35.929 ] 00:30:35.929 } 00:30:35.929 ] 00:30:35.929 21:00:38 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.929 21:00:38 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:35.929 21:00:38 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:35.929 21:00:38 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:35.929 21:00:39 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F4Q1P0FGN 00:30:35.929 21:00:39 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:35.929 21:00:39 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:35.929 21:00:39 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:35.929 21:00:39 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:30:35.929 21:00:39 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F4Q1P0FGN '!=' BTLJ72430F4Q1P0FGN ']' 00:30:35.929 21:00:39 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:30:35.929 21:00:39 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:35.929 21:00:39 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.929 21:00:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:35.929 21:00:39 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.929 21:00:39 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:35.929 21:00:39 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:35.929 21:00:39 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:35.929 21:00:39 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:30:35.929 21:00:39 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:35.929 21:00:39 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:30:35.929 21:00:39 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:35.929 21:00:39 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:35.929 rmmod nvme_tcp 00:30:35.929 rmmod nvme_fabrics 00:30:35.929 rmmod nvme_keyring 00:30:35.929 21:00:39 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:35.929 21:00:39 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:30:35.929 21:00:39 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:30:35.929 21:00:39 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 1830314 ']' 00:30:35.929 21:00:39 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1830314 00:30:35.929 21:00:39 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 1830314 ']' 00:30:35.930 21:00:39 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 1830314 00:30:35.930 21:00:39 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:30:35.930 21:00:39 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:35.930 21:00:39 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1830314 00:30:35.930 21:00:39 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:35.930 21:00:39 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:35.930 21:00:39 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1830314' 00:30:35.930 killing process with pid 1830314 00:30:35.930 21:00:39 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 1830314 00:30:35.930 21:00:39 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 1830314 00:30:37.308 21:00:40 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:37.308 21:00:40 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:37.308 21:00:40 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:37.308 21:00:40 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:30:37.308 21:00:40 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:30:37.308 21:00:40 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:37.308 21:00:40 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:30:37.308 21:00:40 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:37.308 21:00:40 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:37.308 21:00:40 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.308 21:00:40 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:37.308 21:00:40 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.840 21:00:42 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:39.840 00:30:39.840 real 0m18.082s 00:30:39.840 user 0m26.116s 00:30:39.840 sys 0m3.172s 00:30:39.840 21:00:42 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:39.840 21:00:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:39.840 ************************************ 00:30:39.840 END TEST nvmf_identify_passthru 00:30:39.840 ************************************ 00:30:39.840 21:00:43 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:39.840 21:00:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:39.840 21:00:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:39.840 21:00:43 -- common/autotest_common.sh@10 -- # set +x 00:30:39.840 ************************************ 00:30:39.840 START TEST nvmf_dif 00:30:39.840 ************************************ 00:30:39.840 21:00:43 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:39.840 * Looking for test 
storage... 00:30:39.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:39.840 21:00:43 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:39.840 21:00:43 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:30:39.840 21:00:43 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:39.840 21:00:43 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:39.840 21:00:43 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:39.840 21:00:43 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:39.840 21:00:43 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:39.840 21:00:43 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:30:39.840 21:00:43 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:30:39.840 21:00:43 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:30:39.840 21:00:43 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:30:39.840 21:00:43 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:30:39.840 21:00:43 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:30:39.840 21:00:43 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:30:39.840 21:00:43 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:39.840 21:00:43 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:30:39.840 21:00:43 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:30:39.840 21:00:43 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:39.840 21:00:43 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:39.840 21:00:43 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:30:39.840 21:00:43 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:30:39.840 21:00:43 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:39.840 21:00:43 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:30:39.840 21:00:43 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:30:39.840 21:00:43 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:30:39.840 21:00:43 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:30:39.840 21:00:43 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:39.840 21:00:43 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:30:39.840 21:00:43 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:30:39.840 21:00:43 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:39.840 21:00:43 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:39.840 21:00:43 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:30:39.840 21:00:43 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:39.840 21:00:43 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:39.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.840 --rc genhtml_branch_coverage=1 00:30:39.840 --rc genhtml_function_coverage=1 00:30:39.840 --rc genhtml_legend=1 00:30:39.840 --rc geninfo_all_blocks=1 00:30:39.840 --rc geninfo_unexecuted_blocks=1 00:30:39.840 00:30:39.840 ' 00:30:39.840 21:00:43 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:39.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.840 --rc genhtml_branch_coverage=1 00:30:39.840 --rc genhtml_function_coverage=1 00:30:39.840 --rc genhtml_legend=1 00:30:39.840 --rc geninfo_all_blocks=1 00:30:39.840 --rc geninfo_unexecuted_blocks=1 00:30:39.840 00:30:39.840 ' 00:30:39.840 21:00:43 nvmf_dif -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:39.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.840 --rc genhtml_branch_coverage=1 00:30:39.840 --rc genhtml_function_coverage=1 00:30:39.840 --rc genhtml_legend=1 00:30:39.840 --rc geninfo_all_blocks=1 00:30:39.840 --rc geninfo_unexecuted_blocks=1 00:30:39.840 00:30:39.840 ' 00:30:39.840 21:00:43 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:39.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.840 --rc genhtml_branch_coverage=1 00:30:39.840 --rc genhtml_function_coverage=1 00:30:39.840 --rc genhtml_legend=1 00:30:39.840 --rc geninfo_all_blocks=1 00:30:39.840 --rc geninfo_unexecuted_blocks=1 00:30:39.840 00:30:39.840 ' 00:30:39.840 21:00:43 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:39.840 21:00:43 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:39.840 21:00:43 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:39.840 21:00:43 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:39.840 21:00:43 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:39.840 21:00:43 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:39.840 21:00:43 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:39.840 21:00:43 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:39.840 21:00:43 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:39.840 21:00:43 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:39.840 21:00:43 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:39.840 21:00:43 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:39.841 21:00:43 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:39.841 21:00:43 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:39.841 21:00:43 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:39.841 21:00:43 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:39.841 21:00:43 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:39.841 21:00:43 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:39.841 21:00:43 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:39.841 21:00:43 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:30:39.841 21:00:43 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:39.841 21:00:43 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:39.841 21:00:43 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:39.841 21:00:43 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.841 21:00:43 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.841 21:00:43 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.841 21:00:43 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:30:39.841 21:00:43 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.841 21:00:43 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:30:39.841 21:00:43 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:39.841 21:00:43 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:39.841 21:00:43 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:39.841 21:00:43 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:39.841 21:00:43 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:39.841 21:00:43 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:39.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:39.841 21:00:43 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:39.841 21:00:43 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:39.841 21:00:43 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:39.841 21:00:43 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:39.841 21:00:43 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:39.841 21:00:43 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:39.841 21:00:43 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:39.841 21:00:43 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:39.841 21:00:43 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:39.841 21:00:43 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:39.841 21:00:43 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:39.841 21:00:43 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:39.841 21:00:43 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:39.841 21:00:43 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:39.841 21:00:43 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:39.841 21:00:43 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.841 21:00:43 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:39.841 21:00:43 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:39.841 21:00:43 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:30:39.841 21:00:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:41.747 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:41.747 
21:00:45 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:41.747 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:41.747 Found net devices under 0000:09:00.0: cvl_0_0 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:41.747 Found net devices under 0000:09:00.1: cvl_0_1 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:41.747 21:00:45 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:42.007 21:00:45 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:42.007 21:00:45 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:42.007 21:00:45 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:42.007 21:00:45 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:42.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:42.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:30:42.007 00:30:42.007 --- 10.0.0.2 ping statistics --- 00:30:42.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.007 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:30:42.007 21:00:45 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:42.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:42.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:30:42.007 00:30:42.007 --- 10.0.0.1 ping statistics --- 00:30:42.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.007 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:30:42.007 21:00:45 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:42.007 21:00:45 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:30:42.007 21:00:45 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:30:42.007 21:00:45 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:42.943 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:30:42.943 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:30:42.943 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:30:42.943 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:30:42.943 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:30:42.943 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:30:42.943 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:30:42.943 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:30:42.943 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:30:42.943 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:30:42.943 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:30:42.943 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:30:42.943 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:30:42.943 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:30:42.943 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:30:42.943 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:30:42.943 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:30:43.202 21:00:46 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:43.202 21:00:46 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:43.202 21:00:46 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:43.202 21:00:46 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:43.202 21:00:46 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:43.202 21:00:46 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:43.202 21:00:46 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:43.202 21:00:46 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:30:43.202 21:00:46 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:43.202 21:00:46 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:43.202 21:00:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:43.202 21:00:46 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1833581 00:30:43.202 21:00:46 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:43.202 21:00:46 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1833581 00:30:43.202 21:00:46 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 1833581 ']' 00:30:43.202 21:00:46 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:43.202 21:00:46 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:43.202 21:00:46 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:30:43.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:43.202 21:00:46 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:43.202 21:00:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:43.202 [2024-11-26 21:00:46.775502] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:30:43.202 [2024-11-26 21:00:46.775596] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:43.202 [2024-11-26 21:00:46.848642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:43.461 [2024-11-26 21:00:46.907658] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:43.461 [2024-11-26 21:00:46.907705] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:43.461 [2024-11-26 21:00:46.907734] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:43.461 [2024-11-26 21:00:46.907745] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:43.461 [2024-11-26 21:00:46.907754] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:43.461 [2024-11-26 21:00:46.908315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:43.461 21:00:47 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:43.461 21:00:47 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:30:43.461 21:00:47 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:43.461 21:00:47 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:43.461 21:00:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:43.461 21:00:47 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:43.461 21:00:47 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:43.461 21:00:47 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:43.461 21:00:47 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.461 21:00:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:43.461 [2024-11-26 21:00:47.052420] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:43.461 21:00:47 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.461 21:00:47 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:43.461 21:00:47 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:43.461 21:00:47 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:43.461 21:00:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:43.461 ************************************ 00:30:43.461 START TEST fio_dif_1_default 00:30:43.461 ************************************ 00:30:43.461 21:00:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:30:43.461 21:00:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:43.461 21:00:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:43.461 21:00:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:30:43.461 21:00:47 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:30:43.461 21:00:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:43.461 21:00:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:43.461 21:00:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.461 21:00:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:43.461 bdev_null0 00:30:43.461 21:00:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.461 21:00:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:43.461 21:00:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.461 21:00:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:43.461 21:00:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.461 21:00:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:43.461 21:00:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.461 21:00:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:43.461 21:00:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.461 21:00:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:43.461 21:00:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.461 21:00:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:43.461 [2024-11-26 21:00:47.112758] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:43.461 21:00:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.461 21:00:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:43.461 21:00:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:43.461 21:00:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:43.461 21:00:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:30:43.461 21:00:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:30:43.461 21:00:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:43.461 21:00:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:43.461 { 00:30:43.461 "params": { 00:30:43.461 "name": "Nvme$subsystem", 00:30:43.461 "trtype": "$TEST_TRANSPORT", 00:30:43.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:43.461 "adrfam": "ipv4", 00:30:43.461 "trsvcid": "$NVMF_PORT", 00:30:43.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:43.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:43.461 "hdgst": ${hdgst:-false}, 00:30:43.461 "ddgst": ${ddgst:-false} 00:30:43.461 }, 00:30:43.461 "method": "bdev_nvme_attach_controller" 00:30:43.461 } 00:30:43.461 EOF 00:30:43.461 )") 00:30:43.461 21:00:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:43.461 21:00:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:43.461 21:00:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:30:43.461 21:00:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:43.461 21:00:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:43.462 21:00:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:30:43.462 21:00:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:43.462 21:00:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:43.462 21:00:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:30:43.462 21:00:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:30:43.462 21:00:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:30:43.462 21:00:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:43.462 21:00:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:30:43.462 21:00:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:43.462 21:00:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:30:43.462 21:00:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:43.462 21:00:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:43.462 21:00:47 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:30:43.462 21:00:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
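The target-side plumbing for this DIF test is driven entirely through rpc_cmd, a thin wrapper around SPDK's scripts/rpc.py talking to the nvmf_tgt started above on its /var/tmp/spdk.sock RPC socket. Replayed by hand against such a target, the traced sequence would look roughly like the sketch below; the transport flags, null-bdev geometry, NQN, serial number and listener address are taken from the trace itself, while the relative script path and the default RPC socket are assumptions about the environment, not part of the captured run.
# minimal manual replay of the rpc_cmd calls traced above (sketch, not the test script itself)
./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420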
00:30:43.462 21:00:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:30:43.462 21:00:47 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:43.462 "params": { 00:30:43.462 "name": "Nvme0", 00:30:43.462 "trtype": "tcp", 00:30:43.462 "traddr": "10.0.0.2", 00:30:43.462 "adrfam": "ipv4", 00:30:43.462 "trsvcid": "4420", 00:30:43.462 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:43.462 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:43.462 "hdgst": false, 00:30:43.462 "ddgst": false 00:30:43.462 }, 00:30:43.462 "method": "bdev_nvme_attach_controller" 00:30:43.462 }' 00:30:43.462 21:00:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:43.462 21:00:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:43.462 21:00:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:43.462 21:00:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:43.462 21:00:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:30:43.462 21:00:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:43.721 21:00:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:43.721 21:00:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:43.721 21:00:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:43.721 21:00:47 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:43.721 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:43.721 fio-3.35 00:30:43.721 Starting 1 thread 00:30:55.917 00:30:55.917 filename0: (groupid=0, jobs=1): err= 0: pid=1833818: Tue Nov 26 21:00:58 2024 00:30:55.917 read: IOPS=192, BW=770KiB/s (788kB/s)(7712KiB/10018msec) 00:30:55.917 slat (nsec): min=5027, max=82492, avg=9350.19, stdev=3586.81 00:30:55.917 clat (usec): min=548, max=44791, avg=20754.85, stdev=20200.40 00:30:55.917 lat (usec): min=556, max=44831, avg=20764.20, stdev=20200.16 00:30:55.917 clat percentiles (usec): 00:30:55.917 | 1.00th=[ 570], 5.00th=[ 586], 10.00th=[ 603], 20.00th=[ 627], 00:30:55.917 | 30.00th=[ 652], 40.00th=[ 685], 50.00th=[ 807], 60.00th=[41157], 00:30:55.917 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:55.917 | 99.00th=[41681], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:30:55.917 | 99.99th=[44827] 00:30:55.917 bw ( KiB/s): min= 384, max= 3456, per=99.89%, avg=769.60, stdev=853.14, samples=20 00:30:55.917 iops : min= 96, max= 864, avg=192.40, stdev=213.28, samples=20 00:30:55.917 lat (usec) : 750=49.53%, 1000=0.67% 00:30:55.917 lat (msec) : 50=49.79% 00:30:55.917 cpu : usr=91.15%, sys=8.56%, ctx=23, majf=0, minf=259 00:30:55.917 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:55.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:55.917 issued rwts: total=1928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:55.917 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:55.917 
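The fio job that produced the results above is generated on the fly by gen_fio_conf and handed to fio on /dev/fd/61, with the bdev_nvme_attach_controller JSON shown above on /dev/fd/62 and the spdk_bdev engine loaded via LD_PRELOAD. A standalone job file matching the parameters echoed in the banner (randread, 4 KiB blocks, iodepth 4, one Nvme0n1 target) might look roughly like the sketch below; the time_based/runtime settings and the bdev.json filename are assumptions, and the generated job may differ in detail.
# sketch of an equivalent standalone run, assuming the JSON printed above is saved as bdev.json
cat > dif.fio <<'EOF'
; job roughly equivalent to the one gen_fio_conf produced above
[global]
ioengine=spdk_bdev
; bdev/controller config: the JSON printed by gen_nvmf_target_json, saved to a file
spdk_json_conf=bdev.json
; the SPDK fio bdev plugin requires thread=1
thread=1
direct=1
rw=randread
bs=4096
iodepth=4
time_based=1
runtime=10
[filename0]
; filename is the SPDK bdev name exposed by bdev_nvme_attach_controller, not a file path
filename=Nvme0n1
EOF
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev /usr/src/fio/fio dif.fio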
00:30:55.917 Run status group 0 (all jobs): 00:30:55.917 READ: bw=770KiB/s (788kB/s), 770KiB/s-770KiB/s (788kB/s-788kB/s), io=7712KiB (7897kB), run=10018-10018msec 00:30:55.917 21:00:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:55.917 21:00:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:55.917 21:00:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:30:55.917 21:00:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:55.917 21:00:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:55.917 21:00:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.918 00:30:55.918 real 0m11.239s 00:30:55.918 user 0m10.357s 00:30:55.918 sys 0m1.142s 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:55.918 ************************************ 00:30:55.918 END TEST fio_dif_1_default 00:30:55.918 ************************************ 00:30:55.918 21:00:58 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:55.918 21:00:58 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:55.918 21:00:58 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:55.918 21:00:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:55.918 ************************************ 00:30:55.918 START TEST fio_dif_1_multi_subsystems 00:30:55.918 ************************************ 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:55.918 bdev_null0 00:30:55.918 21:00:58 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:55.918 [2024-11-26 21:00:58.404718] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:55.918 bdev_null1 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:55.918 { 00:30:55.918 "params": { 00:30:55.918 "name": "Nvme$subsystem", 00:30:55.918 "trtype": "$TEST_TRANSPORT", 00:30:55.918 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:55.918 "adrfam": "ipv4", 00:30:55.918 "trsvcid": "$NVMF_PORT", 00:30:55.918 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:55.918 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:55.918 "hdgst": ${hdgst:-false}, 00:30:55.918 "ddgst": ${ddgst:-false} 00:30:55.918 }, 00:30:55.918 "method": "bdev_nvme_attach_controller" 00:30:55.918 } 00:30:55.918 EOF 00:30:55.918 )") 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:55.918 
21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:55.918 { 00:30:55.918 "params": { 00:30:55.918 "name": "Nvme$subsystem", 00:30:55.918 "trtype": "$TEST_TRANSPORT", 00:30:55.918 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:55.918 "adrfam": "ipv4", 00:30:55.918 "trsvcid": "$NVMF_PORT", 00:30:55.918 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:55.918 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:55.918 "hdgst": ${hdgst:-false}, 00:30:55.918 "ddgst": ${ddgst:-false} 00:30:55.918 }, 00:30:55.918 "method": "bdev_nvme_attach_controller" 00:30:55.918 } 00:30:55.918 EOF 00:30:55.918 )") 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:30:55.918 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:55.918 "params": { 00:30:55.918 "name": "Nvme0", 00:30:55.918 "trtype": "tcp", 00:30:55.918 "traddr": "10.0.0.2", 00:30:55.918 "adrfam": "ipv4", 00:30:55.918 "trsvcid": "4420", 00:30:55.918 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:55.919 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:55.919 "hdgst": false, 00:30:55.919 "ddgst": false 00:30:55.919 }, 00:30:55.919 "method": "bdev_nvme_attach_controller" 00:30:55.919 },{ 00:30:55.919 "params": { 00:30:55.919 "name": "Nvme1", 00:30:55.919 "trtype": "tcp", 00:30:55.919 "traddr": "10.0.0.2", 00:30:55.919 "adrfam": "ipv4", 00:30:55.919 "trsvcid": "4420", 00:30:55.919 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:55.919 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:55.919 "hdgst": false, 00:30:55.919 "ddgst": false 00:30:55.919 }, 00:30:55.919 "method": "bdev_nvme_attach_controller" 00:30:55.919 }' 00:30:55.919 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:55.919 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:55.919 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:55.919 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:55.919 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:30:55.919 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:55.919 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 
-- # asan_lib= 00:30:55.919 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:55.919 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:55.919 21:00:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:55.919 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:55.919 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:55.919 fio-3.35 00:30:55.919 Starting 2 threads 00:31:05.886 00:31:05.886 filename0: (groupid=0, jobs=1): err= 0: pid=1835219: Tue Nov 26 21:01:09 2024 00:31:05.886 read: IOPS=205, BW=824KiB/s (843kB/s)(8256KiB/10023msec) 00:31:05.886 slat (nsec): min=7019, max=74883, avg=9258.29, stdev=3771.76 00:31:05.886 clat (usec): min=538, max=44060, avg=19394.05, stdev=20262.97 00:31:05.886 lat (usec): min=545, max=44090, avg=19403.31, stdev=20263.02 00:31:05.886 clat percentiles (usec): 00:31:05.886 | 1.00th=[ 578], 5.00th=[ 594], 10.00th=[ 627], 20.00th=[ 676], 00:31:05.886 | 30.00th=[ 750], 40.00th=[ 791], 50.00th=[ 848], 60.00th=[41157], 00:31:05.886 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:31:05.886 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43779], 99.95th=[44303], 00:31:05.886 | 99.99th=[44303] 00:31:05.886 bw ( KiB/s): min= 704, max= 1088, per=49.25%, avg=824.00, stdev=91.62, samples=20 00:31:05.886 iops : min= 176, max= 272, avg=206.00, stdev=22.90, samples=20 00:31:05.886 lat (usec) : 750=29.65%, 1000=24.32% 00:31:05.886 lat (msec) : 2=0.10%, 50=45.93% 00:31:05.886 cpu : usr=94.66%, sys=5.00%, ctx=26, majf=0, minf=190 00:31:05.886 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:05.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.886 issued rwts: total=2064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.886 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:05.886 filename1: (groupid=0, jobs=1): err= 0: pid=1835220: Tue Nov 26 21:01:09 2024 00:31:05.886 read: IOPS=212, BW=850KiB/s (871kB/s)(8512KiB/10009msec) 00:31:05.886 slat (nsec): min=7008, max=32395, avg=8804.84, stdev=2657.83 00:31:05.886 clat (usec): min=506, max=44091, avg=18785.45, stdev=20263.32 00:31:05.886 lat (usec): min=514, max=44122, avg=18794.26, stdev=20263.21 00:31:05.886 clat percentiles (usec): 00:31:05.886 | 1.00th=[ 537], 5.00th=[ 570], 10.00th=[ 578], 20.00th=[ 594], 00:31:05.886 | 30.00th=[ 619], 40.00th=[ 644], 50.00th=[ 693], 60.00th=[41157], 00:31:05.886 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:31:05.886 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:31:05.886 | 99.99th=[44303] 00:31:05.886 bw ( KiB/s): min= 768, max= 1088, per=50.75%, avg=849.60, stdev=98.21, samples=20 00:31:05.886 iops : min= 192, max= 272, avg=212.40, stdev=24.55, samples=20 00:31:05.886 lat (usec) : 750=52.77%, 1000=2.68% 00:31:05.886 lat (msec) : 50=44.55% 00:31:05.886 cpu : usr=94.69%, sys=4.98%, ctx=14, majf=0, minf=72 00:31:05.886 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:05.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.886 issued rwts: total=2128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.886 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:05.886 00:31:05.886 Run status group 0 (all jobs): 00:31:05.886 READ: bw=1673KiB/s (1713kB/s), 824KiB/s-850KiB/s (843kB/s-871kB/s), io=16.4MiB (17.2MB), run=10009-10023msec 00:31:06.147 21:01:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:06.147 21:01:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:31:06.147 21:01:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:06.147 21:01:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:06.147 21:01:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:31:06.147 21:01:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:06.147 21:01:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.147 21:01:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:06.147 21:01:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.147 21:01:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:06.147 21:01:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.147 21:01:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:06.147 21:01:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.147 21:01:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:06.147 21:01:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:06.147 21:01:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:31:06.147 21:01:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:06.147 21:01:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.147 21:01:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:06.147 21:01:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.147 21:01:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:06.147 21:01:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.147 21:01:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:06.147 21:01:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.147 00:31:06.147 real 0m11.463s 00:31:06.147 user 0m20.511s 00:31:06.147 sys 0m1.289s 00:31:06.147 21:01:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:06.147 21:01:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:06.147 ************************************ 00:31:06.147 END TEST fio_dif_1_multi_subsystems 00:31:06.147 ************************************ 00:31:06.407 21:01:09 nvmf_dif -- 
target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:06.407 21:01:09 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:06.407 21:01:09 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:06.407 21:01:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:06.407 ************************************ 00:31:06.407 START TEST fio_dif_rand_params 00:31:06.407 ************************************ 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:06.407 bdev_null0 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:06.407 [2024-11-26 21:01:09.920758] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4420 *** 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:06.407 { 00:31:06.407 "params": { 00:31:06.407 "name": "Nvme$subsystem", 00:31:06.407 "trtype": "$TEST_TRANSPORT", 00:31:06.407 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:06.407 "adrfam": "ipv4", 00:31:06.407 "trsvcid": "$NVMF_PORT", 00:31:06.407 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:06.407 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:06.407 "hdgst": ${hdgst:-false}, 00:31:06.407 "ddgst": ${ddgst:-false} 00:31:06.407 }, 00:31:06.407 "method": "bdev_nvme_attach_controller" 00:31:06.407 } 00:31:06.407 EOF 00:31:06.407 )") 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@584 -- # jq . 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:06.407 "params": { 00:31:06.407 "name": "Nvme0", 00:31:06.407 "trtype": "tcp", 00:31:06.407 "traddr": "10.0.0.2", 00:31:06.407 "adrfam": "ipv4", 00:31:06.407 "trsvcid": "4420", 00:31:06.407 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:06.407 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:06.407 "hdgst": false, 00:31:06.407 "ddgst": false 00:31:06.407 }, 00:31:06.407 "method": "bdev_nvme_attach_controller" 00:31:06.407 }' 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:06.407 21:01:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:06.408 21:01:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:06.408 21:01:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:06.408 21:01:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:06.408 21:01:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:06.408 21:01:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:06.408 21:01:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:06.665 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:06.665 ... 
00:31:06.665 fio-3.35 00:31:06.665 Starting 3 threads 00:31:13.222 00:31:13.222 filename0: (groupid=0, jobs=1): err= 0: pid=1836622: Tue Nov 26 21:01:15 2024 00:31:13.222 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(134MiB/5004msec) 00:31:13.222 slat (usec): min=7, max=127, avg=13.12, stdev= 4.04 00:31:13.222 clat (usec): min=4594, max=51759, avg=13970.96, stdev=4328.66 00:31:13.223 lat (usec): min=4606, max=51774, avg=13984.08, stdev=4328.84 00:31:13.223 clat percentiles (usec): 00:31:13.223 | 1.00th=[ 7439], 5.00th=[ 9110], 10.00th=[11076], 20.00th=[12649], 00:31:13.223 | 30.00th=[13042], 40.00th=[13435], 50.00th=[13829], 60.00th=[14222], 00:31:13.223 | 70.00th=[14615], 80.00th=[15139], 90.00th=[15795], 95.00th=[16319], 00:31:13.223 | 99.00th=[47973], 99.50th=[51119], 99.90th=[51643], 99.95th=[51643], 00:31:13.223 | 99.99th=[51643] 00:31:13.223 bw ( KiB/s): min=25344, max=29952, per=31.26%, avg=27417.60, stdev=1445.39, samples=10 00:31:13.223 iops : min= 198, max= 234, avg=214.20, stdev=11.29, samples=10 00:31:13.223 lat (msec) : 10=7.36%, 20=91.52%, 50=0.56%, 100=0.56% 00:31:13.223 cpu : usr=93.46%, sys=6.02%, ctx=9, majf=0, minf=180 00:31:13.223 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:13.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.223 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.223 issued rwts: total=1073,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.223 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:13.223 filename0: (groupid=0, jobs=1): err= 0: pid=1836623: Tue Nov 26 21:01:15 2024 00:31:13.223 read: IOPS=235, BW=29.4MiB/s (30.8MB/s)(148MiB/5046msec) 00:31:13.223 slat (nsec): min=6834, max=38610, avg=12997.38, stdev=1894.54 00:31:13.223 clat (usec): min=6449, max=52646, avg=12702.44, stdev=5546.61 00:31:13.223 lat (usec): min=6457, max=52661, avg=12715.44, stdev=5546.64 00:31:13.223 clat percentiles (usec): 00:31:13.223 | 1.00th=[ 7570], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10552], 00:31:13.223 | 30.00th=[10945], 40.00th=[11469], 50.00th=[11994], 60.00th=[12518], 00:31:13.223 | 70.00th=[12911], 80.00th=[13566], 90.00th=[14222], 95.00th=[14746], 00:31:13.223 | 99.00th=[51643], 99.50th=[52167], 99.90th=[52691], 99.95th=[52691], 00:31:13.223 | 99.99th=[52691] 00:31:13.223 bw ( KiB/s): min=18688, max=35840, per=34.55%, avg=30310.40, stdev=4450.77, samples=10 00:31:13.223 iops : min= 146, max= 280, avg=236.80, stdev=34.77, samples=10 00:31:13.223 lat (msec) : 10=9.27%, 20=88.80%, 50=0.84%, 100=1.10% 00:31:13.223 cpu : usr=92.84%, sys=6.66%, ctx=10, majf=0, minf=55 00:31:13.223 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:13.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.223 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.223 issued rwts: total=1187,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.223 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:13.223 filename0: (groupid=0, jobs=1): err= 0: pid=1836624: Tue Nov 26 21:01:15 2024 00:31:13.223 read: IOPS=237, BW=29.7MiB/s (31.1MB/s)(150MiB/5043msec) 00:31:13.223 slat (nsec): min=7504, max=35971, avg=13072.63, stdev=1971.80 00:31:13.223 clat (usec): min=4501, max=54163, avg=12577.09, stdev=4492.09 00:31:13.223 lat (usec): min=4513, max=54175, avg=12590.17, stdev=4492.05 00:31:13.223 clat percentiles (usec): 00:31:13.223 | 1.00th=[ 5735], 5.00th=[ 8455], 10.00th=[ 9765], 20.00th=[10552], 
00:31:13.223 | 30.00th=[11207], 40.00th=[11731], 50.00th=[12387], 60.00th=[12780], 00:31:13.223 | 70.00th=[13304], 80.00th=[14091], 90.00th=[14746], 95.00th=[15270], 00:31:13.223 | 99.00th=[45876], 99.50th=[51119], 99.90th=[53740], 99.95th=[54264], 00:31:13.223 | 99.99th=[54264] 00:31:13.223 bw ( KiB/s): min=26880, max=32768, per=34.90%, avg=30617.60, stdev=1757.95, samples=10 00:31:13.223 iops : min= 210, max= 256, avg=239.20, stdev=13.73, samples=10 00:31:13.223 lat (msec) : 10=11.60%, 20=87.23%, 50=0.50%, 100=0.67% 00:31:13.223 cpu : usr=92.72%, sys=6.78%, ctx=9, majf=0, minf=123 00:31:13.223 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:13.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.223 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.223 issued rwts: total=1198,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.223 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:13.223 00:31:13.223 Run status group 0 (all jobs): 00:31:13.223 READ: bw=85.7MiB/s (89.8MB/s), 26.8MiB/s-29.7MiB/s (28.1MB/s-31.1MB/s), io=432MiB (453MB), run=5004-5046msec 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.223 bdev_null0 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.223 [2024-11-26 21:01:16.179170] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.223 bdev_null1 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:13.223 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.224 bdev_null2 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:13.224 { 00:31:13.224 "params": { 00:31:13.224 "name": "Nvme$subsystem", 00:31:13.224 "trtype": "$TEST_TRANSPORT", 00:31:13.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:13.224 "adrfam": "ipv4", 
00:31:13.224 "trsvcid": "$NVMF_PORT", 00:31:13.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:13.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:13.224 "hdgst": ${hdgst:-false}, 00:31:13.224 "ddgst": ${ddgst:-false} 00:31:13.224 }, 00:31:13.224 "method": "bdev_nvme_attach_controller" 00:31:13.224 } 00:31:13.224 EOF 00:31:13.224 )") 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:13.224 { 00:31:13.224 "params": { 00:31:13.224 "name": "Nvme$subsystem", 00:31:13.224 "trtype": "$TEST_TRANSPORT", 00:31:13.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:13.224 "adrfam": "ipv4", 00:31:13.224 "trsvcid": "$NVMF_PORT", 00:31:13.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:13.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:13.224 "hdgst": ${hdgst:-false}, 00:31:13.224 "ddgst": ${ddgst:-false} 00:31:13.224 }, 00:31:13.224 "method": "bdev_nvme_attach_controller" 00:31:13.224 } 00:31:13.224 EOF 00:31:13.224 )") 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@72 -- # (( file <= files )) 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:13.224 { 00:31:13.224 "params": { 00:31:13.224 "name": "Nvme$subsystem", 00:31:13.224 "trtype": "$TEST_TRANSPORT", 00:31:13.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:13.224 "adrfam": "ipv4", 00:31:13.224 "trsvcid": "$NVMF_PORT", 00:31:13.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:13.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:13.224 "hdgst": ${hdgst:-false}, 00:31:13.224 "ddgst": ${ddgst:-false} 00:31:13.224 }, 00:31:13.224 "method": "bdev_nvme_attach_controller" 00:31:13.224 } 00:31:13.224 EOF 00:31:13.224 )") 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:13.224 "params": { 00:31:13.224 "name": "Nvme0", 00:31:13.224 "trtype": "tcp", 00:31:13.224 "traddr": "10.0.0.2", 00:31:13.224 "adrfam": "ipv4", 00:31:13.224 "trsvcid": "4420", 00:31:13.224 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:13.224 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:13.224 "hdgst": false, 00:31:13.224 "ddgst": false 00:31:13.224 }, 00:31:13.224 "method": "bdev_nvme_attach_controller" 00:31:13.224 },{ 00:31:13.224 "params": { 00:31:13.224 "name": "Nvme1", 00:31:13.224 "trtype": "tcp", 00:31:13.224 "traddr": "10.0.0.2", 00:31:13.224 "adrfam": "ipv4", 00:31:13.224 "trsvcid": "4420", 00:31:13.224 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:13.224 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:13.224 "hdgst": false, 00:31:13.224 "ddgst": false 00:31:13.224 }, 00:31:13.224 "method": "bdev_nvme_attach_controller" 00:31:13.224 },{ 00:31:13.224 "params": { 00:31:13.224 "name": "Nvme2", 00:31:13.224 "trtype": "tcp", 00:31:13.224 "traddr": "10.0.0.2", 00:31:13.224 "adrfam": "ipv4", 00:31:13.224 "trsvcid": "4420", 00:31:13.224 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:13.224 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:13.224 "hdgst": false, 00:31:13.224 "ddgst": false 00:31:13.224 }, 00:31:13.224 "method": "bdev_nvme_attach_controller" 00:31:13.224 }' 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:13.224 21:01:16 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:13.224 21:01:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:13.224 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:13.224 ... 00:31:13.224 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:13.224 ... 00:31:13.224 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:13.224 ... 00:31:13.224 fio-3.35 00:31:13.224 Starting 24 threads 00:31:25.425 00:31:25.425 filename0: (groupid=0, jobs=1): err= 0: pid=1837483: Tue Nov 26 21:01:27 2024 00:31:25.425 read: IOPS=481, BW=1924KiB/s (1971kB/s)(18.8MiB/10025msec) 00:31:25.425 slat (nsec): min=5584, max=97094, avg=19969.19, stdev=15286.72 00:31:25.425 clat (usec): min=1465, max=35305, avg=33104.93, stdev=3045.95 00:31:25.425 lat (usec): min=1479, max=35333, avg=33124.90, stdev=3045.65 00:31:25.425 clat percentiles (usec): 00:31:25.425 | 1.00th=[15401], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:31:25.425 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:31:25.425 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:31:25.425 | 99.00th=[34866], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:31:25.425 | 99.99th=[35390] 00:31:25.425 bw ( KiB/s): min= 1792, max= 2232, per=4.20%, avg=1922.80, stdev=82.70, samples=20 00:31:25.425 iops : min= 448, max= 558, avg=480.70, stdev=20.68, samples=20 00:31:25.425 lat (msec) : 2=0.33%, 4=0.33%, 10=0.15%, 20=0.48%, 50=98.71% 00:31:25.425 cpu : usr=98.43%, sys=1.16%, ctx=13, majf=0, minf=42 00:31:25.425 IO depths : 1=6.2%, 2=12.3%, 4=24.7%, 8=50.4%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:25.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.425 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.425 issued rwts: total=4823,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.425 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.425 filename0: (groupid=0, jobs=1): err= 0: pid=1837484: Tue Nov 26 21:01:27 2024 00:31:25.425 read: IOPS=475, BW=1900KiB/s (1946kB/s)(18.6MiB/10003msec) 00:31:25.425 slat (nsec): min=8250, max=75252, avg=26733.95, stdev=9657.76 00:31:25.425 clat (usec): min=13615, max=64723, avg=33437.86, stdev=2737.05 00:31:25.425 lat (usec): min=13633, max=64780, avg=33464.60, stdev=2737.48 00:31:25.425 clat percentiles (usec): 00:31:25.425 | 1.00th=[32375], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:31:25.425 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:31:25.425 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:31:25.425 | 99.00th=[35390], 99.50th=[49021], 99.90th=[64750], 99.95th=[64750], 00:31:25.425 | 99.99th=[64750] 00:31:25.425 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1893.05, stdev=67.05, samples=19 00:31:25.425 iops : min= 416, max= 480, avg=473.26, stdev=16.76, samples=19 00:31:25.425 lat (msec) : 20=0.93%, 50=98.70%, 100=0.38% 00:31:25.425 cpu : usr=98.64%, sys=0.91%, ctx=26, majf=0, minf=26 00:31:25.425 IO depths : 1=5.7%, 2=11.9%, 
4=24.9%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:31:25.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.425 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.425 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.425 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.425 filename0: (groupid=0, jobs=1): err= 0: pid=1837485: Tue Nov 26 21:01:27 2024 00:31:25.425 read: IOPS=474, BW=1899KiB/s (1945kB/s)(18.6MiB/10007msec) 00:31:25.425 slat (usec): min=4, max=103, avg=24.27, stdev=12.00 00:31:25.425 clat (usec): min=18627, max=49052, avg=33451.79, stdev=1097.17 00:31:25.425 lat (usec): min=18637, max=49069, avg=33476.05, stdev=1095.70 00:31:25.425 clat percentiles (usec): 00:31:25.425 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[32900], 00:31:25.425 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:31:25.425 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:31:25.425 | 99.00th=[34866], 99.50th=[35390], 99.90th=[49021], 99.95th=[49021], 00:31:25.425 | 99.99th=[49021] 00:31:25.425 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1899.95, stdev=47.58, samples=19 00:31:25.425 iops : min= 448, max= 480, avg=474.95, stdev=11.99, samples=19 00:31:25.425 lat (msec) : 20=0.04%, 50=99.96% 00:31:25.425 cpu : usr=97.67%, sys=1.46%, ctx=148, majf=0, minf=25 00:31:25.425 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:25.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.425 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.425 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.425 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.425 filename0: (groupid=0, jobs=1): err= 0: pid=1837486: Tue Nov 26 21:01:27 2024 00:31:25.425 read: IOPS=475, BW=1903KiB/s (1948kB/s)(18.6MiB/10024msec) 00:31:25.425 slat (usec): min=4, max=121, avg=38.99, stdev=12.61 00:31:25.425 clat (usec): min=17303, max=49215, avg=33311.50, stdev=1353.29 00:31:25.425 lat (usec): min=17341, max=49228, avg=33350.49, stdev=1351.52 00:31:25.425 clat percentiles (usec): 00:31:25.425 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:31:25.425 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:31:25.425 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:31:25.425 | 99.00th=[34866], 99.50th=[35390], 99.90th=[49021], 99.95th=[49021], 00:31:25.425 | 99.99th=[49021] 00:31:25.425 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1900.80, stdev=46.89, samples=20 00:31:25.425 iops : min= 448, max= 480, avg=475.20, stdev=11.72, samples=20 00:31:25.425 lat (msec) : 20=0.29%, 50=99.71% 00:31:25.425 cpu : usr=96.73%, sys=1.94%, ctx=271, majf=0, minf=18 00:31:25.425 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:25.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.425 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.425 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.425 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.425 filename0: (groupid=0, jobs=1): err= 0: pid=1837487: Tue Nov 26 21:01:27 2024 00:31:25.425 read: IOPS=476, BW=1907KiB/s (1953kB/s)(18.6MiB/10002msec) 00:31:25.425 slat (usec): min=10, max=106, avg=41.92, stdev=13.51 00:31:25.425 clat (usec): 
min=17131, max=43692, avg=33192.48, stdev=1114.98 00:31:25.425 lat (usec): min=17160, max=43719, avg=33234.40, stdev=1115.19 00:31:25.425 clat percentiles (usec): 00:31:25.425 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:31:25.425 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:31:25.425 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:31:25.425 | 99.00th=[34341], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:31:25.425 | 99.99th=[43779] 00:31:25.425 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1906.53, stdev=40.36, samples=19 00:31:25.425 iops : min= 448, max= 480, avg=476.63, stdev=10.09, samples=19 00:31:25.425 lat (msec) : 20=0.34%, 50=99.66% 00:31:25.425 cpu : usr=97.52%, sys=1.55%, ctx=96, majf=0, minf=28 00:31:25.425 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:25.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.425 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.425 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.425 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.425 filename0: (groupid=0, jobs=1): err= 0: pid=1837488: Tue Nov 26 21:01:27 2024 00:31:25.425 read: IOPS=476, BW=1907KiB/s (1953kB/s)(18.6MiB/10001msec) 00:31:25.425 slat (nsec): min=6485, max=81165, avg=39665.10, stdev=12899.26 00:31:25.425 clat (usec): min=15751, max=42531, avg=33189.97, stdev=1124.26 00:31:25.425 lat (usec): min=15762, max=42577, avg=33229.64, stdev=1125.41 00:31:25.425 clat percentiles (usec): 00:31:25.425 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:31:25.425 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:31:25.425 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:31:25.425 | 99.00th=[34341], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:31:25.425 | 99.99th=[42730] 00:31:25.425 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1906.53, stdev=40.36, samples=19 00:31:25.425 iops : min= 448, max= 480, avg=476.63, stdev=10.09, samples=19 00:31:25.425 lat (msec) : 20=0.34%, 50=99.66% 00:31:25.425 cpu : usr=97.75%, sys=1.44%, ctx=169, majf=0, minf=27 00:31:25.425 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:25.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.425 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.425 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.425 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.425 filename0: (groupid=0, jobs=1): err= 0: pid=1837489: Tue Nov 26 21:01:27 2024 00:31:25.425 read: IOPS=475, BW=1901KiB/s (1946kB/s)(18.6MiB/10001msec) 00:31:25.425 slat (nsec): min=4855, max=80711, avg=36666.74, stdev=11222.88 00:31:25.425 clat (usec): min=21572, max=52502, avg=33334.72, stdev=1360.69 00:31:25.425 lat (usec): min=21608, max=52516, avg=33371.38, stdev=1359.92 00:31:25.425 clat percentiles (usec): 00:31:25.425 | 1.00th=[32637], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:31:25.425 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:31:25.425 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:31:25.425 | 99.00th=[34866], 99.50th=[35390], 99.90th=[52691], 99.95th=[52691], 00:31:25.425 | 99.99th=[52691] 00:31:25.425 bw ( KiB/s): min= 1792, max= 1920, per=4.14%, avg=1893.21, 
stdev=53.30, samples=19 00:31:25.426 iops : min= 448, max= 480, avg=473.26, stdev=13.40, samples=19 00:31:25.426 lat (msec) : 50=99.66%, 100=0.34% 00:31:25.426 cpu : usr=97.67%, sys=1.51%, ctx=59, majf=0, minf=21 00:31:25.426 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:25.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.426 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.426 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.426 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.426 filename0: (groupid=0, jobs=1): err= 0: pid=1837490: Tue Nov 26 21:01:27 2024 00:31:25.426 read: IOPS=474, BW=1899KiB/s (1945kB/s)(18.6MiB/10007msec) 00:31:25.426 slat (nsec): min=4098, max=76364, avg=36666.41, stdev=11570.93 00:31:25.426 clat (usec): min=21575, max=67143, avg=33372.38, stdev=1740.50 00:31:25.426 lat (usec): min=21622, max=67154, avg=33409.04, stdev=1738.86 00:31:25.426 clat percentiles (usec): 00:31:25.426 | 1.00th=[32637], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:31:25.426 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:31:25.426 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:31:25.426 | 99.00th=[34866], 99.50th=[35390], 99.90th=[58459], 99.95th=[58459], 00:31:25.426 | 99.99th=[67634] 00:31:25.426 bw ( KiB/s): min= 1788, max= 1920, per=4.14%, avg=1892.84, stdev=54.04, samples=19 00:31:25.426 iops : min= 447, max= 480, avg=473.21, stdev=13.51, samples=19 00:31:25.426 lat (msec) : 50=99.66%, 100=0.34% 00:31:25.426 cpu : usr=97.62%, sys=1.56%, ctx=73, majf=0, minf=23 00:31:25.426 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:25.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.426 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.426 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.426 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.426 filename1: (groupid=0, jobs=1): err= 0: pid=1837491: Tue Nov 26 21:01:27 2024 00:31:25.426 read: IOPS=475, BW=1900KiB/s (1946kB/s)(18.6MiB/10002msec) 00:31:25.426 slat (nsec): min=4115, max=79800, avg=36811.62, stdev=11144.42 00:31:25.426 clat (usec): min=21548, max=53529, avg=33348.45, stdev=1412.83 00:31:25.426 lat (usec): min=21586, max=53545, avg=33385.26, stdev=1411.33 00:31:25.426 clat percentiles (usec): 00:31:25.426 | 1.00th=[32637], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:31:25.426 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:31:25.426 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:31:25.426 | 99.00th=[34866], 99.50th=[35390], 99.90th=[53740], 99.95th=[53740], 00:31:25.426 | 99.99th=[53740] 00:31:25.426 bw ( KiB/s): min= 1792, max= 1920, per=4.14%, avg=1893.05, stdev=53.61, samples=19 00:31:25.426 iops : min= 448, max= 480, avg=473.26, stdev=13.40, samples=19 00:31:25.426 lat (msec) : 50=99.66%, 100=0.34% 00:31:25.426 cpu : usr=97.84%, sys=1.41%, ctx=57, majf=0, minf=21 00:31:25.426 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:25.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.426 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.426 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.426 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:31:25.426 filename1: (groupid=0, jobs=1): err= 0: pid=1837492: Tue Nov 26 21:01:27 2024 00:31:25.426 read: IOPS=475, BW=1903KiB/s (1948kB/s)(18.6MiB/10024msec) 00:31:25.426 slat (nsec): min=4089, max=91745, avg=36915.73, stdev=13526.95 00:31:25.426 clat (usec): min=18025, max=49589, avg=33315.69, stdev=1243.61 00:31:25.426 lat (usec): min=18037, max=49617, avg=33352.61, stdev=1243.10 00:31:25.426 clat percentiles (usec): 00:31:25.426 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:31:25.426 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:31:25.426 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:31:25.426 | 99.00th=[34866], 99.50th=[34866], 99.90th=[49546], 99.95th=[49546], 00:31:25.426 | 99.99th=[49546] 00:31:25.426 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1900.95, stdev=46.53, samples=20 00:31:25.426 iops : min= 448, max= 480, avg=475.20, stdev=11.72, samples=20 00:31:25.426 lat (msec) : 20=0.19%, 50=99.81% 00:31:25.426 cpu : usr=96.68%, sys=2.10%, ctx=220, majf=0, minf=31 00:31:25.426 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:25.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.426 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.426 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.426 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.426 filename1: (groupid=0, jobs=1): err= 0: pid=1837493: Tue Nov 26 21:01:27 2024 00:31:25.426 read: IOPS=476, BW=1907KiB/s (1952kB/s)(18.6MiB/10003msec) 00:31:25.426 slat (usec): min=7, max=128, avg=49.39, stdev=21.33 00:31:25.426 clat (usec): min=23275, max=35472, avg=33119.82, stdev=866.96 00:31:25.426 lat (usec): min=23283, max=35515, avg=33169.22, stdev=866.63 00:31:25.426 clat percentiles (usec): 00:31:25.426 | 1.00th=[28705], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:31:25.426 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:31:25.426 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:31:25.426 | 99.00th=[34341], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:31:25.426 | 99.99th=[35390] 00:31:25.426 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1906.53, stdev=40.36, samples=19 00:31:25.426 iops : min= 448, max= 480, avg=476.63, stdev=10.09, samples=19 00:31:25.426 lat (msec) : 50=100.00% 00:31:25.426 cpu : usr=96.83%, sys=1.96%, ctx=152, majf=0, minf=20 00:31:25.426 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:25.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.426 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.426 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.426 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.426 filename1: (groupid=0, jobs=1): err= 0: pid=1837494: Tue Nov 26 21:01:27 2024 00:31:25.426 read: IOPS=476, BW=1905KiB/s (1951kB/s)(18.6MiB/10012msec) 00:31:25.426 slat (nsec): min=4309, max=77667, avg=32240.11, stdev=12864.81 00:31:25.426 clat (usec): min=21582, max=35735, avg=33338.72, stdev=873.26 00:31:25.426 lat (usec): min=21629, max=35755, avg=33370.96, stdev=871.41 00:31:25.426 clat percentiles (usec): 00:31:25.426 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[32900], 00:31:25.426 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 
00:31:25.426 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:31:25.426 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:31:25.426 | 99.99th=[35914] 00:31:25.426 bw ( KiB/s): min= 1792, max= 1923, per=4.15%, avg=1900.11, stdev=47.66, samples=19 00:31:25.426 iops : min= 448, max= 480, avg=474.95, stdev=11.99, samples=19 00:31:25.426 lat (msec) : 50=100.00% 00:31:25.426 cpu : usr=97.06%, sys=1.87%, ctx=127, majf=0, minf=31 00:31:25.426 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:25.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.426 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.426 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.426 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.426 filename1: (groupid=0, jobs=1): err= 0: pid=1837495: Tue Nov 26 21:01:27 2024 00:31:25.426 read: IOPS=475, BW=1900KiB/s (1946kB/s)(18.6MiB/10002msec) 00:31:25.426 slat (nsec): min=8193, max=78216, avg=27563.00, stdev=9775.14 00:31:25.426 clat (usec): min=13592, max=63645, avg=33407.36, stdev=2167.55 00:31:25.426 lat (usec): min=13601, max=63699, avg=33434.92, stdev=2168.16 00:31:25.426 clat percentiles (usec): 00:31:25.426 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[32900], 00:31:25.426 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:31:25.426 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:31:25.426 | 99.00th=[34866], 99.50th=[35390], 99.90th=[63701], 99.95th=[63701], 00:31:25.426 | 99.99th=[63701] 00:31:25.426 bw ( KiB/s): min= 1667, max= 1920, per=4.14%, avg=1893.21, stdev=67.96, samples=19 00:31:25.426 iops : min= 416, max= 480, avg=473.26, stdev=17.13, samples=19 00:31:25.426 lat (msec) : 20=0.38%, 50=99.28%, 100=0.34% 00:31:25.426 cpu : usr=98.00%, sys=1.29%, ctx=74, majf=0, minf=20 00:31:25.426 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:25.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.426 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.426 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.426 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.426 filename1: (groupid=0, jobs=1): err= 0: pid=1837496: Tue Nov 26 21:01:27 2024 00:31:25.426 read: IOPS=476, BW=1905KiB/s (1951kB/s)(18.6MiB/10012msec) 00:31:25.426 slat (nsec): min=6874, max=88945, avg=37111.99, stdev=15396.85 00:31:25.426 clat (usec): min=17130, max=45112, avg=33281.52, stdev=2150.05 00:31:25.426 lat (usec): min=17174, max=45158, avg=33318.64, stdev=2151.61 00:31:25.426 clat percentiles (usec): 00:31:25.426 | 1.00th=[22938], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:31:25.426 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:31:25.426 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:31:25.426 | 99.00th=[43254], 99.50th=[43779], 99.90th=[44827], 99.95th=[44827], 00:31:25.426 | 99.99th=[45351] 00:31:25.426 bw ( KiB/s): min= 1792, max= 1936, per=4.17%, avg=1906.53, stdev=41.06, samples=19 00:31:25.426 iops : min= 448, max= 484, avg=476.63, stdev=10.26, samples=19 00:31:25.426 lat (msec) : 20=0.34%, 50=99.66% 00:31:25.426 cpu : usr=98.40%, sys=1.20%, ctx=16, majf=0, minf=25 00:31:25.426 IO depths : 1=3.2%, 2=9.4%, 4=25.0%, 8=53.1%, 16=9.3%, 32=0.0%, >=64=0.0% 00:31:25.426 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.426 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.426 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.426 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.426 filename1: (groupid=0, jobs=1): err= 0: pid=1837497: Tue Nov 26 21:01:27 2024 00:31:25.426 read: IOPS=474, BW=1899KiB/s (1945kB/s)(18.6MiB/10009msec) 00:31:25.426 slat (usec): min=8, max=160, avg=44.39, stdev=21.12 00:31:25.426 clat (usec): min=20916, max=60250, avg=33257.24, stdev=1764.42 00:31:25.426 lat (usec): min=20931, max=60270, avg=33301.63, stdev=1763.87 00:31:25.426 clat percentiles (usec): 00:31:25.427 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:31:25.427 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:31:25.427 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:31:25.427 | 99.00th=[34866], 99.50th=[34866], 99.90th=[60031], 99.95th=[60031], 00:31:25.427 | 99.99th=[60031] 00:31:25.427 bw ( KiB/s): min= 1792, max= 1920, per=4.14%, avg=1893.05, stdev=53.61, samples=19 00:31:25.427 iops : min= 448, max= 480, avg=473.26, stdev=13.40, samples=19 00:31:25.427 lat (msec) : 50=99.66%, 100=0.34% 00:31:25.427 cpu : usr=98.35%, sys=1.20%, ctx=35, majf=0, minf=26 00:31:25.427 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:25.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.427 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.427 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.427 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.427 filename1: (groupid=0, jobs=1): err= 0: pid=1837498: Tue Nov 26 21:01:27 2024 00:31:25.427 read: IOPS=476, BW=1905KiB/s (1951kB/s)(18.6MiB/10012msec) 00:31:25.427 slat (usec): min=7, max=124, avg=40.88, stdev=19.30 00:31:25.427 clat (usec): min=11464, max=66110, avg=33230.76, stdev=1910.06 00:31:25.427 lat (usec): min=11475, max=66129, avg=33271.64, stdev=1909.36 00:31:25.427 clat percentiles (usec): 00:31:25.427 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:31:25.427 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:31:25.427 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:31:25.427 | 99.00th=[34866], 99.50th=[35390], 99.90th=[51643], 99.95th=[51643], 00:31:25.427 | 99.99th=[66323] 00:31:25.427 bw ( KiB/s): min= 1776, max= 1920, per=4.14%, avg=1893.05, stdev=53.88, samples=19 00:31:25.427 iops : min= 444, max= 480, avg=473.26, stdev=13.47, samples=19 00:31:25.427 lat (msec) : 20=0.34%, 50=99.33%, 100=0.34% 00:31:25.427 cpu : usr=98.31%, sys=1.23%, ctx=13, majf=0, minf=21 00:31:25.427 IO depths : 1=4.5%, 2=10.8%, 4=25.0%, 8=51.7%, 16=8.0%, 32=0.0%, >=64=0.0% 00:31:25.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.427 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.427 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.427 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.427 filename2: (groupid=0, jobs=1): err= 0: pid=1837499: Tue Nov 26 21:01:27 2024 00:31:25.427 read: IOPS=474, BW=1899KiB/s (1944kB/s)(18.6MiB/10010msec) 00:31:25.427 slat (nsec): min=5852, max=56219, avg=24548.21, stdev=8943.67 00:31:25.427 clat (usec): min=13667, max=71078, avg=33498.36, stdev=3061.45 
00:31:25.427 lat (usec): min=13681, max=71101, avg=33522.91, stdev=3060.79 00:31:25.427 clat percentiles (usec): 00:31:25.427 | 1.00th=[20055], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:31:25.427 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:31:25.427 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:31:25.427 | 99.00th=[46924], 99.50th=[49021], 99.90th=[70779], 99.95th=[70779], 00:31:25.427 | 99.99th=[70779] 00:31:25.427 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1893.05, stdev=67.05, samples=19 00:31:25.427 iops : min= 416, max= 480, avg=473.26, stdev=16.76, samples=19 00:31:25.427 lat (msec) : 20=1.01%, 50=98.65%, 100=0.34% 00:31:25.427 cpu : usr=98.36%, sys=1.23%, ctx=13, majf=0, minf=19 00:31:25.427 IO depths : 1=5.0%, 2=11.2%, 4=24.8%, 8=51.4%, 16=7.5%, 32=0.0%, >=64=0.0% 00:31:25.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.427 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.427 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.427 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.427 filename2: (groupid=0, jobs=1): err= 0: pid=1837500: Tue Nov 26 21:01:27 2024 00:31:25.427 read: IOPS=477, BW=1910KiB/s (1956kB/s)(18.7MiB/10019msec) 00:31:25.427 slat (nsec): min=6924, max=87510, avg=38559.14, stdev=11790.09 00:31:25.427 clat (usec): min=17244, max=35482, avg=33151.68, stdev=1341.98 00:31:25.427 lat (usec): min=17284, max=35525, avg=33190.24, stdev=1343.77 00:31:25.427 clat percentiles (usec): 00:31:25.427 | 1.00th=[23462], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:31:25.427 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:31:25.427 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:31:25.427 | 99.00th=[34341], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:31:25.427 | 99.99th=[35390] 00:31:25.427 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1905.80, stdev=39.03, samples=20 00:31:25.427 iops : min= 448, max= 480, avg=476.40, stdev= 9.88, samples=20 00:31:25.427 lat (msec) : 20=0.29%, 50=99.71% 00:31:25.427 cpu : usr=98.72%, sys=0.88%, ctx=13, majf=0, minf=18 00:31:25.427 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:25.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.427 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.427 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.427 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.427 filename2: (groupid=0, jobs=1): err= 0: pid=1837501: Tue Nov 26 21:01:27 2024 00:31:25.427 read: IOPS=476, BW=1905KiB/s (1951kB/s)(18.6MiB/10011msec) 00:31:25.427 slat (usec): min=6, max=125, avg=47.09, stdev=20.53 00:31:25.427 clat (usec): min=11497, max=51569, avg=33174.82, stdev=1845.96 00:31:25.427 lat (usec): min=11520, max=51588, avg=33221.91, stdev=1843.70 00:31:25.427 clat percentiles (usec): 00:31:25.427 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:31:25.427 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:31:25.427 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:31:25.427 | 99.00th=[34866], 99.50th=[35390], 99.90th=[51643], 99.95th=[51643], 00:31:25.427 | 99.99th=[51643] 00:31:25.427 bw ( KiB/s): min= 1792, max= 1920, per=4.14%, avg=1893.05, stdev=53.61, samples=19 00:31:25.427 iops : min= 448, 
max= 480, avg=473.26, stdev=13.40, samples=19 00:31:25.427 lat (msec) : 20=0.34%, 50=99.33%, 100=0.34% 00:31:25.427 cpu : usr=98.02%, sys=1.35%, ctx=57, majf=0, minf=24 00:31:25.427 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:25.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.427 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.427 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.427 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.427 filename2: (groupid=0, jobs=1): err= 0: pid=1837502: Tue Nov 26 21:01:27 2024 00:31:25.427 read: IOPS=531, BW=2124KiB/s (2175kB/s)(20.8MiB/10047msec) 00:31:25.427 slat (nsec): min=5759, max=86372, avg=18611.49, stdev=13004.29 00:31:25.427 clat (usec): min=12400, max=69726, avg=29945.12, stdev=6223.96 00:31:25.427 lat (usec): min=12412, max=69744, avg=29963.73, stdev=6226.55 00:31:25.427 clat percentiles (usec): 00:31:25.427 | 1.00th=[12649], 5.00th=[19792], 10.00th=[21103], 20.00th=[25035], 00:31:25.427 | 30.00th=[25822], 40.00th=[29754], 50.00th=[33162], 60.00th=[33162], 00:31:25.427 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[39060], 00:31:25.427 | 99.00th=[44827], 99.50th=[48497], 99.90th=[69731], 99.95th=[69731], 00:31:25.427 | 99.99th=[69731] 00:31:25.427 bw ( KiB/s): min= 1667, max= 2512, per=4.64%, avg=2124.79, stdev=211.27, samples=19 00:31:25.427 iops : min= 416, max= 628, avg=531.16, stdev=52.91, samples=19 00:31:25.427 lat (msec) : 20=5.72%, 50=93.80%, 100=0.49% 00:31:25.427 cpu : usr=98.15%, sys=1.43%, ctx=13, majf=0, minf=24 00:31:25.427 IO depths : 1=0.1%, 2=1.6%, 4=8.6%, 8=75.3%, 16=14.5%, 32=0.0%, >=64=0.0% 00:31:25.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.427 complete : 0=0.0%, 4=90.2%, 8=6.2%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.427 issued rwts: total=5336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.427 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.427 filename2: (groupid=0, jobs=1): err= 0: pid=1837503: Tue Nov 26 21:01:27 2024 00:31:25.427 read: IOPS=476, BW=1907KiB/s (1952kB/s)(18.6MiB/10003msec) 00:31:25.427 slat (nsec): min=7955, max=82536, avg=40588.51, stdev=12438.63 00:31:25.427 clat (usec): min=17167, max=35526, avg=33194.15, stdev=1061.39 00:31:25.427 lat (usec): min=17180, max=35550, avg=33234.74, stdev=1062.24 00:31:25.427 clat percentiles (usec): 00:31:25.427 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:31:25.427 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:31:25.427 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:31:25.427 | 99.00th=[34341], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:31:25.427 | 99.99th=[35390] 00:31:25.427 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1906.53, stdev=40.36, samples=19 00:31:25.427 iops : min= 448, max= 480, avg=476.63, stdev=10.09, samples=19 00:31:25.427 lat (msec) : 20=0.34%, 50=99.66% 00:31:25.427 cpu : usr=98.47%, sys=1.13%, ctx=13, majf=0, minf=21 00:31:25.427 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:25.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.427 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.427 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.427 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.427 
filename2: (groupid=0, jobs=1): err= 0: pid=1837504: Tue Nov 26 21:01:27 2024 00:31:25.427 read: IOPS=476, BW=1907KiB/s (1953kB/s)(18.6MiB/10002msec) 00:31:25.427 slat (nsec): min=8621, max=85222, avg=40569.29, stdev=12767.02 00:31:25.427 clat (usec): min=17174, max=43914, avg=33213.45, stdev=1370.05 00:31:25.427 lat (usec): min=17217, max=43953, avg=33254.02, stdev=1370.75 00:31:25.427 clat percentiles (usec): 00:31:25.427 | 1.00th=[27919], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:31:25.427 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:31:25.427 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:31:25.427 | 99.00th=[34866], 99.50th=[35390], 99.90th=[43779], 99.95th=[43779], 00:31:25.427 | 99.99th=[43779] 00:31:25.427 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1906.53, stdev=40.36, samples=19 00:31:25.427 iops : min= 448, max= 480, avg=476.63, stdev=10.09, samples=19 00:31:25.427 lat (msec) : 20=0.34%, 50=99.66% 00:31:25.427 cpu : usr=98.63%, sys=0.96%, ctx=13, majf=0, minf=22 00:31:25.427 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:25.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.427 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.427 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.427 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.428 filename2: (groupid=0, jobs=1): err= 0: pid=1837505: Tue Nov 26 21:01:27 2024 00:31:25.428 read: IOPS=475, BW=1901KiB/s (1946kB/s)(18.6MiB/10001msec) 00:31:25.428 slat (usec): min=5, max=106, avg=35.78, stdev=10.99 00:31:25.428 clat (usec): min=21641, max=52504, avg=33343.27, stdev=1362.00 00:31:25.428 lat (usec): min=21677, max=52527, avg=33379.04, stdev=1361.16 00:31:25.428 clat percentiles (usec): 00:31:25.428 | 1.00th=[32637], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:31:25.428 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:31:25.428 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[33817], 00:31:25.428 | 99.00th=[34866], 99.50th=[35390], 99.90th=[52167], 99.95th=[52691], 00:31:25.428 | 99.99th=[52691] 00:31:25.428 bw ( KiB/s): min= 1792, max= 1920, per=4.14%, avg=1893.21, stdev=53.30, samples=19 00:31:25.428 iops : min= 448, max= 480, avg=473.26, stdev=13.40, samples=19 00:31:25.428 lat (msec) : 50=99.66%, 100=0.34% 00:31:25.428 cpu : usr=98.52%, sys=1.07%, ctx=12, majf=0, minf=20 00:31:25.428 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:25.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.428 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.428 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.428 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.428 filename2: (groupid=0, jobs=1): err= 0: pid=1837506: Tue Nov 26 21:01:27 2024 00:31:25.428 read: IOPS=474, BW=1900KiB/s (1945kB/s)(18.6MiB/10005msec) 00:31:25.428 slat (nsec): min=6293, max=71161, avg=31779.89, stdev=12310.54 00:31:25.428 clat (usec): min=21617, max=57201, avg=33440.53, stdev=1599.31 00:31:25.428 lat (usec): min=21648, max=57224, avg=33472.31, stdev=1597.56 00:31:25.428 clat percentiles (usec): 00:31:25.428 | 1.00th=[32637], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:31:25.428 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:31:25.428 | 70.00th=[33817], 
80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:31:25.428 | 99.00th=[34866], 99.50th=[35390], 99.90th=[57410], 99.95th=[57410], 00:31:25.428 | 99.99th=[57410] 00:31:25.428 bw ( KiB/s): min= 1779, max= 1920, per=4.14%, avg=1893.21, stdev=53.52, samples=19 00:31:25.428 iops : min= 444, max= 480, avg=473.26, stdev=13.47, samples=19 00:31:25.428 lat (msec) : 50=99.66%, 100=0.34% 00:31:25.428 cpu : usr=98.58%, sys=1.03%, ctx=12, majf=0, minf=26 00:31:25.428 IO depths : 1=4.6%, 2=10.8%, 4=25.0%, 8=51.7%, 16=7.9%, 32=0.0%, >=64=0.0% 00:31:25.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.428 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.428 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.428 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.428 00:31:25.428 Run status group 0 (all jobs): 00:31:25.428 READ: bw=44.7MiB/s (46.8MB/s), 1899KiB/s-2124KiB/s (1944kB/s-2175kB/s), io=449MiB (471MB), run=10001-10047msec 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for 
sub in "$@" 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:25.428 bdev_null0 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:25.428 [2024-11-26 21:01:27.927647] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:25.428 bdev_null1 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:31:25.428 21:01:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:31:25.429 21:01:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:25.429 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:25.429 21:01:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:25.429 { 00:31:25.429 "params": { 00:31:25.429 "name": "Nvme$subsystem", 
00:31:25.429 "trtype": "$TEST_TRANSPORT", 00:31:25.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:25.429 "adrfam": "ipv4", 00:31:25.429 "trsvcid": "$NVMF_PORT", 00:31:25.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:25.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:25.429 "hdgst": ${hdgst:-false}, 00:31:25.429 "ddgst": ${ddgst:-false} 00:31:25.429 }, 00:31:25.429 "method": "bdev_nvme_attach_controller" 00:31:25.429 } 00:31:25.429 EOF 00:31:25.429 )") 00:31:25.429 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:25.429 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:25.429 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:25.429 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:25.429 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:25.429 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:25.429 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:25.429 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:25.429 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:31:25.429 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:25.429 21:01:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:25.429 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:25.429 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:25.429 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:25.429 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:31:25.429 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:25.429 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:25.429 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:25.429 21:01:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:25.429 21:01:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:25.429 { 00:31:25.429 "params": { 00:31:25.429 "name": "Nvme$subsystem", 00:31:25.429 "trtype": "$TEST_TRANSPORT", 00:31:25.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:25.429 "adrfam": "ipv4", 00:31:25.429 "trsvcid": "$NVMF_PORT", 00:31:25.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:25.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:25.429 "hdgst": ${hdgst:-false}, 00:31:25.429 "ddgst": ${ddgst:-false} 00:31:25.429 }, 00:31:25.429 "method": "bdev_nvme_attach_controller" 00:31:25.429 } 00:31:25.429 EOF 00:31:25.429 )") 00:31:25.429 21:01:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:25.429 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:25.429 21:01:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # 
(( file <= files )) 00:31:25.429 21:01:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:31:25.429 21:01:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:31:25.429 21:01:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:25.429 "params": { 00:31:25.429 "name": "Nvme0", 00:31:25.429 "trtype": "tcp", 00:31:25.429 "traddr": "10.0.0.2", 00:31:25.429 "adrfam": "ipv4", 00:31:25.429 "trsvcid": "4420", 00:31:25.429 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:25.429 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:25.429 "hdgst": false, 00:31:25.429 "ddgst": false 00:31:25.429 }, 00:31:25.429 "method": "bdev_nvme_attach_controller" 00:31:25.429 },{ 00:31:25.429 "params": { 00:31:25.429 "name": "Nvme1", 00:31:25.429 "trtype": "tcp", 00:31:25.429 "traddr": "10.0.0.2", 00:31:25.429 "adrfam": "ipv4", 00:31:25.429 "trsvcid": "4420", 00:31:25.429 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:25.429 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:25.429 "hdgst": false, 00:31:25.429 "ddgst": false 00:31:25.429 }, 00:31:25.429 "method": "bdev_nvme_attach_controller" 00:31:25.429 }' 00:31:25.429 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:25.429 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:25.429 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:25.429 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:25.429 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:25.429 21:01:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:25.429 21:01:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:25.429 21:01:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:25.429 21:01:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:25.429 21:01:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:25.429 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:25.429 ... 00:31:25.429 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:25.429 ... 
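The setup traced above boils down to the sequence below. This is a consolidated sketch rather than the literal dif.sh code, assuming rpc_cmd resolves to SPDK's scripts/rpc.py against the already-running nvmf_tgt; the bdev parameters, NQNs, address and port are copied from the trace:

  # two null bdevs with 512-byte blocks plus 16 bytes of per-block metadata, DIF type 1,
  # each exported through its own NVMe/TCP subsystem
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # fio then runs through the spdk_bdev plugin: /dev/fd/62 carries the bdev_nvme_attach_controller
  # JSON printed above, /dev/fd/61 the generated job file (bs=8k,16k,128k, numjobs=2, iodepth=8, 5s run)
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61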
00:31:25.429 fio-3.35 00:31:25.429 Starting 4 threads 00:31:30.710 00:31:30.710 filename0: (groupid=0, jobs=1): err= 0: pid=1838880: Tue Nov 26 21:01:34 2024 00:31:30.710 read: IOPS=1846, BW=14.4MiB/s (15.1MB/s)(72.2MiB/5002msec) 00:31:30.710 slat (nsec): min=4619, max=99284, avg=16647.54, stdev=8922.72 00:31:30.710 clat (usec): min=740, max=7691, avg=4275.26, stdev=712.58 00:31:30.710 lat (usec): min=772, max=7707, avg=4291.90, stdev=712.33 00:31:30.710 clat percentiles (usec): 00:31:30.710 | 1.00th=[ 2769], 5.00th=[ 3359], 10.00th=[ 3523], 20.00th=[ 3818], 00:31:30.710 | 30.00th=[ 4015], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4293], 00:31:30.710 | 70.00th=[ 4359], 80.00th=[ 4621], 90.00th=[ 5145], 95.00th=[ 5669], 00:31:30.710 | 99.00th=[ 6783], 99.50th=[ 7046], 99.90th=[ 7373], 99.95th=[ 7373], 00:31:30.710 | 99.99th=[ 7701] 00:31:30.710 bw ( KiB/s): min=14192, max=15072, per=24.62%, avg=14776.89, stdev=246.20, samples=9 00:31:30.710 iops : min= 1774, max= 1884, avg=1847.11, stdev=30.78, samples=9 00:31:30.710 lat (usec) : 750=0.01% 00:31:30.710 lat (msec) : 2=0.17%, 4=29.72%, 10=70.10% 00:31:30.710 cpu : usr=91.36%, sys=6.14%, ctx=127, majf=0, minf=0 00:31:30.710 IO depths : 1=0.2%, 2=11.4%, 4=60.0%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:30.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.710 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.710 issued rwts: total=9237,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.710 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:30.710 filename0: (groupid=0, jobs=1): err= 0: pid=1838881: Tue Nov 26 21:01:34 2024 00:31:30.710 read: IOPS=1973, BW=15.4MiB/s (16.2MB/s)(77.1MiB/5002msec) 00:31:30.710 slat (nsec): min=6961, max=62966, avg=12511.63, stdev=6724.21 00:31:30.710 clat (usec): min=805, max=7694, avg=4011.76, stdev=597.69 00:31:30.710 lat (usec): min=818, max=7708, avg=4024.27, stdev=597.91 00:31:30.710 clat percentiles (usec): 00:31:30.710 | 1.00th=[ 2278], 5.00th=[ 3097], 10.00th=[ 3359], 20.00th=[ 3589], 00:31:30.710 | 30.00th=[ 3785], 40.00th=[ 3916], 50.00th=[ 4080], 60.00th=[ 4146], 00:31:30.710 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4555], 95.00th=[ 4948], 00:31:30.710 | 99.00th=[ 5866], 99.50th=[ 6194], 99.90th=[ 6980], 99.95th=[ 7308], 00:31:30.710 | 99.99th=[ 7701] 00:31:30.710 bw ( KiB/s): min=15424, max=16288, per=26.29%, avg=15779.20, stdev=258.85, samples=10 00:31:30.710 iops : min= 1928, max= 2036, avg=1972.40, stdev=32.36, samples=10 00:31:30.710 lat (usec) : 1000=0.03% 00:31:30.710 lat (msec) : 2=0.51%, 4=45.13%, 10=54.34% 00:31:30.710 cpu : usr=94.64%, sys=4.84%, ctx=10, majf=0, minf=9 00:31:30.710 IO depths : 1=0.4%, 2=11.8%, 4=59.4%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:30.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.710 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.710 issued rwts: total=9870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.710 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:30.710 filename1: (groupid=0, jobs=1): err= 0: pid=1838882: Tue Nov 26 21:01:34 2024 00:31:30.710 read: IOPS=1876, BW=14.7MiB/s (15.4MB/s)(73.4MiB/5003msec) 00:31:30.710 slat (nsec): min=6739, max=74530, avg=13849.94, stdev=7795.11 00:31:30.710 clat (usec): min=993, max=7853, avg=4216.59, stdev=654.20 00:31:30.710 lat (usec): min=1021, max=7870, avg=4230.44, stdev=654.02 00:31:30.710 clat percentiles (usec): 00:31:30.710 | 1.00th=[ 2671], 5.00th=[ 
3326], 10.00th=[ 3556], 20.00th=[ 3818], 00:31:30.710 | 30.00th=[ 3982], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4228], 00:31:30.710 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4948], 95.00th=[ 5407], 00:31:30.710 | 99.00th=[ 6456], 99.50th=[ 6849], 99.90th=[ 7504], 99.95th=[ 7504], 00:31:30.710 | 99.99th=[ 7832] 00:31:30.710 bw ( KiB/s): min=14288, max=15424, per=25.01%, avg=15014.20, stdev=419.00, samples=10 00:31:30.710 iops : min= 1786, max= 1928, avg=1876.70, stdev=52.37, samples=10 00:31:30.710 lat (usec) : 1000=0.01% 00:31:30.710 lat (msec) : 2=0.18%, 4=31.58%, 10=68.23% 00:31:30.710 cpu : usr=94.24%, sys=5.24%, ctx=8, majf=0, minf=9 00:31:30.710 IO depths : 1=0.2%, 2=11.4%, 4=60.1%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:30.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.710 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.710 issued rwts: total=9389,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.710 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:30.710 filename1: (groupid=0, jobs=1): err= 0: pid=1838883: Tue Nov 26 21:01:34 2024 00:31:30.710 read: IOPS=1808, BW=14.1MiB/s (14.8MB/s)(70.6MiB/5001msec) 00:31:30.710 slat (nsec): min=6674, max=74541, avg=15109.65, stdev=8357.32 00:31:30.710 clat (usec): min=873, max=7794, avg=4373.96, stdev=741.58 00:31:30.710 lat (usec): min=885, max=7813, avg=4389.07, stdev=741.06 00:31:30.710 clat percentiles (usec): 00:31:30.710 | 1.00th=[ 2835], 5.00th=[ 3392], 10.00th=[ 3654], 20.00th=[ 3949], 00:31:30.710 | 30.00th=[ 4080], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4359], 00:31:30.710 | 70.00th=[ 4490], 80.00th=[ 4752], 90.00th=[ 5276], 95.00th=[ 5932], 00:31:30.710 | 99.00th=[ 6980], 99.50th=[ 7242], 99.90th=[ 7570], 99.95th=[ 7635], 00:31:30.710 | 99.99th=[ 7767] 00:31:30.710 bw ( KiB/s): min=13904, max=15072, per=24.09%, avg=14458.80, stdev=366.63, samples=10 00:31:30.710 iops : min= 1738, max= 1884, avg=1807.30, stdev=45.89, samples=10 00:31:30.710 lat (usec) : 1000=0.02% 00:31:30.710 lat (msec) : 2=0.17%, 4=23.36%, 10=76.45% 00:31:30.710 cpu : usr=94.94%, sys=4.56%, ctx=8, majf=0, minf=9 00:31:30.710 IO depths : 1=0.1%, 2=10.9%, 4=60.2%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:30.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.710 complete : 0=0.0%, 4=93.3%, 8=6.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.710 issued rwts: total=9042,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.710 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:30.710 00:31:30.710 Run status group 0 (all jobs): 00:31:30.710 READ: bw=58.6MiB/s (61.5MB/s), 14.1MiB/s-15.4MiB/s (14.8MB/s-16.2MB/s), io=293MiB (308MB), run=5001-5003msec 00:31:30.969 21:01:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:30.969 21:01:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:30.969 21:01:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:30.969 21:01:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:30.969 21:01:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:30.969 21:01:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:30.969 21:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.969 21:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:31:30.969 21:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.969 21:01:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:30.969 21:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.969 21:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.970 21:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.970 21:01:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:30.970 21:01:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:30.970 21:01:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:30.970 21:01:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:30.970 21:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.970 21:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.970 21:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.970 21:01:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:30.970 21:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.970 21:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.970 21:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.970 00:31:30.970 real 0m24.575s 00:31:30.970 user 4m33.140s 00:31:30.970 sys 0m6.319s 00:31:30.970 21:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:30.970 21:01:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:30.970 ************************************ 00:31:30.970 END TEST fio_dif_rand_params 00:31:30.970 ************************************ 00:31:30.970 21:01:34 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:30.970 21:01:34 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:30.970 21:01:34 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:30.970 21:01:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:30.970 ************************************ 00:31:30.970 START TEST fio_dif_digest 00:31:30.970 ************************************ 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:31:30.970 21:01:34 
nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:30.970 bdev_null0 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:30.970 [2024-11-26 21:01:34.544361] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:30.970 { 00:31:30.970 "params": { 00:31:30.970 "name": "Nvme$subsystem", 00:31:30.970 "trtype": "$TEST_TRANSPORT", 00:31:30.970 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:30.970 "adrfam": "ipv4", 00:31:30.970 "trsvcid": "$NVMF_PORT", 00:31:30.970 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:30.970 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:30.970 "hdgst": ${hdgst:-false}, 00:31:30.970 "ddgst": 
${ddgst:-false} 00:31:30.970 }, 00:31:30.970 "method": "bdev_nvme_attach_controller" 00:31:30.970 } 00:31:30.970 EOF 00:31:30.970 )") 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:30.970 "params": { 00:31:30.970 "name": "Nvme0", 00:31:30.970 "trtype": "tcp", 00:31:30.970 "traddr": "10.0.0.2", 00:31:30.970 "adrfam": "ipv4", 00:31:30.970 "trsvcid": "4420", 00:31:30.970 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:30.970 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:30.970 "hdgst": true, 00:31:30.970 "ddgst": true 00:31:30.970 }, 00:31:30.970 "method": "bdev_nvme_attach_controller" 00:31:30.970 }' 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:30.970 21:01:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:31.228 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:31.228 ... 
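The digest variant differs from the earlier runs in two places visible in the trace: the null bdev is created with --dif-type 3, and the generated bdev_nvme_attach_controller parameters set "hdgst": true and "ddgst": true, which enables CRC32C header and data digests on the NVMe/TCP initiator connection. A rough sketch of just the target-side setup step, again assuming scripts/rpc.py behind rpc_cmd:

  # single null bdev with 16-byte metadata and protection information type 3
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420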
00:31:31.228 fio-3.35 00:31:31.229 Starting 3 threads 00:31:43.487 00:31:43.487 filename0: (groupid=0, jobs=1): err= 0: pid=1839640: Tue Nov 26 21:01:45 2024 00:31:43.487 read: IOPS=206, BW=25.8MiB/s (27.1MB/s)(259MiB/10045msec) 00:31:43.487 slat (nsec): min=4645, max=49043, avg=14736.27, stdev=1752.83 00:31:43.487 clat (usec): min=11701, max=53336, avg=14492.31, stdev=1404.90 00:31:43.487 lat (usec): min=11716, max=53350, avg=14507.05, stdev=1404.91 00:31:43.487 clat percentiles (usec): 00:31:43.487 | 1.00th=[12256], 5.00th=[13042], 10.00th=[13304], 20.00th=[13698], 00:31:43.487 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14484], 60.00th=[14615], 00:31:43.487 | 70.00th=[14877], 80.00th=[15139], 90.00th=[15533], 95.00th=[15926], 00:31:43.487 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17695], 99.95th=[44827], 00:31:43.487 | 99.99th=[53216] 00:31:43.487 bw ( KiB/s): min=25344, max=27136, per=32.98%, avg=26521.60, stdev=442.63, samples=20 00:31:43.487 iops : min= 198, max= 212, avg=207.20, stdev= 3.46, samples=20 00:31:43.487 lat (msec) : 20=99.90%, 50=0.05%, 100=0.05% 00:31:43.487 cpu : usr=93.58%, sys=5.90%, ctx=17, majf=0, minf=151 00:31:43.487 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:43.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.487 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.487 issued rwts: total=2074,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.487 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:43.487 filename0: (groupid=0, jobs=1): err= 0: pid=1839641: Tue Nov 26 21:01:45 2024 00:31:43.487 read: IOPS=221, BW=27.7MiB/s (29.0MB/s)(278MiB/10048msec) 00:31:43.487 slat (nsec): min=4277, max=39466, avg=15188.48, stdev=3331.57 00:31:43.487 clat (usec): min=10207, max=50769, avg=13515.61, stdev=1403.44 00:31:43.487 lat (usec): min=10221, max=50783, avg=13530.79, stdev=1403.37 00:31:43.487 clat percentiles (usec): 00:31:43.487 | 1.00th=[11338], 5.00th=[12125], 10.00th=[12387], 20.00th=[12780], 00:31:43.487 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13566], 60.00th=[13698], 00:31:43.487 | 70.00th=[13960], 80.00th=[14222], 90.00th=[14484], 95.00th=[14877], 00:31:43.487 | 99.00th=[15533], 99.50th=[15795], 99.90th=[17695], 99.95th=[49546], 00:31:43.487 | 99.99th=[50594] 00:31:43.487 bw ( KiB/s): min=28160, max=28928, per=35.37%, avg=28441.60, stdev=274.22, samples=20 00:31:43.487 iops : min= 220, max= 226, avg=222.20, stdev= 2.14, samples=20 00:31:43.487 lat (msec) : 20=99.91%, 50=0.04%, 100=0.04% 00:31:43.487 cpu : usr=93.09%, sys=5.89%, ctx=408, majf=0, minf=123 00:31:43.487 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:43.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.487 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.487 issued rwts: total=2224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.487 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:43.487 filename0: (groupid=0, jobs=1): err= 0: pid=1839642: Tue Nov 26 21:01:45 2024 00:31:43.487 read: IOPS=200, BW=25.1MiB/s (26.3MB/s)(252MiB/10046msec) 00:31:43.487 slat (nsec): min=4349, max=50184, avg=15127.44, stdev=2756.93 00:31:43.487 clat (usec): min=12041, max=56289, avg=14923.91, stdev=1489.23 00:31:43.487 lat (usec): min=12055, max=56304, avg=14939.04, stdev=1489.16 00:31:43.487 clat percentiles (usec): 00:31:43.487 | 1.00th=[12911], 5.00th=[13435], 10.00th=[13829], 20.00th=[14222], 
00:31:43.487 | 30.00th=[14353], 40.00th=[14615], 50.00th=[14877], 60.00th=[15139], 00:31:43.487 | 70.00th=[15270], 80.00th=[15533], 90.00th=[16057], 95.00th=[16450], 00:31:43.487 | 99.00th=[17171], 99.50th=[17433], 99.90th=[18482], 99.95th=[49021], 00:31:43.487 | 99.99th=[56361] 00:31:43.487 bw ( KiB/s): min=25088, max=26368, per=32.03%, avg=25753.60, stdev=304.04, samples=20 00:31:43.488 iops : min= 196, max= 206, avg=201.20, stdev= 2.38, samples=20 00:31:43.488 lat (msec) : 20=99.90%, 50=0.05%, 100=0.05% 00:31:43.488 cpu : usr=86.61%, sys=8.65%, ctx=563, majf=0, minf=120 00:31:43.488 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:43.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.488 issued rwts: total=2014,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.488 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:43.488 00:31:43.488 Run status group 0 (all jobs): 00:31:43.488 READ: bw=78.5MiB/s (82.3MB/s), 25.1MiB/s-27.7MiB/s (26.3MB/s-29.0MB/s), io=789MiB (827MB), run=10045-10048msec 00:31:43.488 21:01:45 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:43.488 21:01:45 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:43.488 21:01:45 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:43.488 21:01:45 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:43.488 21:01:45 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:43.488 21:01:45 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:43.488 21:01:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.488 21:01:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:43.488 21:01:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.488 21:01:45 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:43.488 21:01:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.488 21:01:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:43.488 21:01:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.488 00:31:43.488 real 0m11.279s 00:31:43.488 user 0m28.673s 00:31:43.488 sys 0m2.358s 00:31:43.488 21:01:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:43.488 21:01:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:43.488 ************************************ 00:31:43.488 END TEST fio_dif_digest 00:31:43.488 ************************************ 00:31:43.488 21:01:45 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:43.488 21:01:45 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:43.488 21:01:45 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:43.488 21:01:45 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:31:43.488 21:01:45 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:43.488 21:01:45 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:31:43.488 21:01:45 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:43.488 21:01:45 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:43.488 rmmod nvme_tcp 00:31:43.488 rmmod nvme_fabrics 00:31:43.488 rmmod nvme_keyring 00:31:43.488 21:01:45 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:43.488 21:01:45 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:31:43.488 21:01:45 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:31:43.488 21:01:45 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1833581 ']' 00:31:43.488 21:01:45 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1833581 00:31:43.488 21:01:45 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 1833581 ']' 00:31:43.488 21:01:45 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 1833581 00:31:43.488 21:01:45 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:31:43.488 21:01:45 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:43.488 21:01:45 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1833581 00:31:43.488 21:01:45 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:43.488 21:01:45 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:43.488 21:01:45 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1833581' 00:31:43.488 killing process with pid 1833581 00:31:43.488 21:01:45 nvmf_dif -- common/autotest_common.sh@973 -- # kill 1833581 00:31:43.488 21:01:45 nvmf_dif -- common/autotest_common.sh@978 -- # wait 1833581 00:31:43.488 21:01:46 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:31:43.488 21:01:46 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:43.747 Waiting for block devices as requested 00:31:43.747 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:43.747 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:44.006 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:44.006 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:44.006 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:44.006 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:44.264 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:44.264 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:44.264 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:31:44.523 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:44.523 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:44.523 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:44.782 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:44.782 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:44.782 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:44.782 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:45.041 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:45.041 21:01:48 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:45.041 21:01:48 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:45.041 21:01:48 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:31:45.041 21:01:48 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:31:45.041 21:01:48 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:45.041 21:01:48 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:31:45.041 21:01:48 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:45.041 21:01:48 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:45.041 21:01:48 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:45.041 21:01:48 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:45.041 21:01:48 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.579 21:01:50 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:47.579 
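(Editor's note: the teardown trace above relies on a tag-and-flush pattern for firewall rules: every rule the test adds carries an 'SPDK_NVMF' comment, so cleanup can strip them in one pass without touching unrelated rules. A minimal sketch of that pattern follows; the interface name, port, and flush command are taken from this log, everything else is illustrative.)

  # Add a rule tagged with a recognizable comment, the way the test's ipts() helper does
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

  # Teardown: drop every tagged rule in one pass, leaving unrelated rules alone
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  # Finish by flushing the test addresses from the initiator-side interface
  ip -4 addr flush cvl_0_1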
00:31:47.579 real 1m7.666s 00:31:47.579 user 6m30.356s 00:31:47.579 sys 0m18.042s 00:31:47.579 21:01:50 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:47.579 21:01:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:47.579 ************************************ 00:31:47.579 END TEST nvmf_dif 00:31:47.579 ************************************ 00:31:47.579 21:01:50 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:47.579 21:01:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:47.579 21:01:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:47.579 21:01:50 -- common/autotest_common.sh@10 -- # set +x 00:31:47.579 ************************************ 00:31:47.579 START TEST nvmf_abort_qd_sizes 00:31:47.579 ************************************ 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:47.580 * Looking for test storage... 00:31:47.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:47.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.580 --rc genhtml_branch_coverage=1 00:31:47.580 --rc genhtml_function_coverage=1 00:31:47.580 --rc genhtml_legend=1 00:31:47.580 --rc geninfo_all_blocks=1 00:31:47.580 --rc geninfo_unexecuted_blocks=1 00:31:47.580 00:31:47.580 ' 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:47.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.580 --rc genhtml_branch_coverage=1 00:31:47.580 --rc genhtml_function_coverage=1 00:31:47.580 --rc genhtml_legend=1 00:31:47.580 --rc geninfo_all_blocks=1 00:31:47.580 --rc geninfo_unexecuted_blocks=1 00:31:47.580 00:31:47.580 ' 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:47.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.580 --rc genhtml_branch_coverage=1 00:31:47.580 --rc genhtml_function_coverage=1 00:31:47.580 --rc genhtml_legend=1 00:31:47.580 --rc geninfo_all_blocks=1 00:31:47.580 --rc geninfo_unexecuted_blocks=1 00:31:47.580 00:31:47.580 ' 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:47.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:47.580 --rc genhtml_branch_coverage=1 00:31:47.580 --rc genhtml_function_coverage=1 00:31:47.580 --rc genhtml_legend=1 00:31:47.580 --rc geninfo_all_blocks=1 00:31:47.580 --rc geninfo_unexecuted_blocks=1 00:31:47.580 00:31:47.580 ' 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:47.580 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:31:47.580 21:01:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:49.482 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:49.482 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:49.482 Found net devices under 0000:09:00.0: cvl_0_0 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:49.482 Found net devices under 0000:09:00.1: cvl_0_1 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:49.482 21:01:52 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:49.482 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:49.483 21:01:52 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:49.483 21:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:49.483 21:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:49.483 21:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:49.483 21:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:49.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:49.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.285 ms 00:31:49.483 00:31:49.483 --- 10.0.0.2 ping statistics --- 00:31:49.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:49.483 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:31:49.483 21:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:49.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:49.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:31:49.483 00:31:49.483 --- 10.0.0.1 ping statistics --- 00:31:49.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:49.483 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:31:49.483 21:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:49.483 21:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:31:49.483 21:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:31:49.483 21:01:53 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:50.857 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:50.857 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:50.857 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:50.857 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:50.857 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:50.857 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:50.857 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:50.857 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:50.857 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:50.857 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:50.857 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:50.857 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:50.857 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:50.857 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:50.857 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:50.857 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:51.791 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:31:51.791 21:01:55 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:51.791 21:01:55 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:51.791 21:01:55 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:51.791 21:01:55 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:51.791 21:01:55 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:51.791 21:01:55 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:51.791 21:01:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:51.791 21:01:55 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:51.791 21:01:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:51.791 21:01:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:51.791 21:01:55 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1844561 00:31:51.791 21:01:55 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:51.791 21:01:55 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1844561 00:31:51.791 21:01:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 1844561 ']' 00:31:51.791 21:01:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:51.791 21:01:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:51.791 21:01:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:51.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:51.791 21:01:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:51.791 21:01:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:52.050 [2024-11-26 21:01:55.498005] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:31:52.050 [2024-11-26 21:01:55.498080] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:52.050 [2024-11-26 21:01:55.571189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:52.050 [2024-11-26 21:01:55.638058] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:52.050 [2024-11-26 21:01:55.638122] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:52.076 [2024-11-26 21:01:55.638136] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:52.076 [2024-11-26 21:01:55.638148] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:52.076 [2024-11-26 21:01:55.638157] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:52.076 [2024-11-26 21:01:55.642324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:52.076 [2024-11-26 21:01:55.642377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:52.076 [2024-11-26 21:01:55.642381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:52.076 [2024-11-26 21:01:55.642354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:52.336 21:01:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:52.336 21:01:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:31:52.336 21:01:55 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:52.336 21:01:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:52.336 21:01:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:52.336 21:01:55 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:52.336 21:01:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:52.336 21:01:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:52.336 21:01:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:52.336 21:01:55 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:31:52.336 21:01:55 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:31:52.336 21:01:55 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:0b:00.0 ]] 00:31:52.336 21:01:55 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:31:52.336 21:01:55 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:31:52.336 21:01:55 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:0b:00.0 ]] 00:31:52.336 21:01:55 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:31:52.336 
21:01:55 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:31:52.336 21:01:55 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:31:52.336 21:01:55 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:31:52.336 21:01:55 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:0b:00.0 00:31:52.336 21:01:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:31:52.336 21:01:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:0b:00.0 00:31:52.336 21:01:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:52.336 21:01:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:52.336 21:01:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:52.336 21:01:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:52.336 ************************************ 00:31:52.336 START TEST spdk_target_abort 00:31:52.336 ************************************ 00:31:52.336 21:01:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:31:52.336 21:01:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:52.336 21:01:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target 00:31:52.336 21:01:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.336 21:01:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:55.617 spdk_targetn1 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:55.617 [2024-11-26 21:01:58.685224] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:55.617 [2024-11-26 21:01:58.725607] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:55.617 21:01:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:58.897 Initializing NVMe Controllers 00:31:58.897 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:58.897 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:58.897 Initialization complete. Launching workers. 00:31:58.897 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12286, failed: 0 00:31:58.897 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1178, failed to submit 11108 00:31:58.897 success 698, unsuccessful 480, failed 0 00:31:58.897 21:02:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:58.897 21:02:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:02.181 Initializing NVMe Controllers 00:32:02.181 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:02.181 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:02.181 Initialization complete. Launching workers. 00:32:02.181 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8830, failed: 0 00:32:02.181 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1276, failed to submit 7554 00:32:02.181 success 320, unsuccessful 956, failed 0 00:32:02.181 21:02:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:02.181 21:02:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:05.473 Initializing NVMe Controllers 00:32:05.473 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:05.473 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:05.473 Initialization complete. Launching workers. 
00:32:05.473 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30988, failed: 0 00:32:05.473 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2637, failed to submit 28351 00:32:05.473 success 514, unsuccessful 2123, failed 0 00:32:05.473 21:02:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:32:05.473 21:02:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.473 21:02:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:05.473 21:02:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.473 21:02:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:05.473 21:02:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.473 21:02:08 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:06.432 21:02:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.432 21:02:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1844561 00:32:06.432 21:02:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 1844561 ']' 00:32:06.432 21:02:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 1844561 00:32:06.432 21:02:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:32:06.432 21:02:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:06.432 21:02:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1844561 00:32:06.432 21:02:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:06.432 21:02:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:06.432 21:02:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1844561' 00:32:06.432 killing process with pid 1844561 00:32:06.432 21:02:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 1844561 00:32:06.432 21:02:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 1844561 00:32:06.432 00:32:06.432 real 0m14.167s 00:32:06.432 user 0m53.853s 00:32:06.432 sys 0m2.593s 00:32:06.432 21:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:06.432 21:02:10 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:06.432 ************************************ 00:32:06.432 END TEST spdk_target_abort 00:32:06.432 ************************************ 00:32:06.432 21:02:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:32:06.432 21:02:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:06.432 21:02:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:06.432 21:02:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:06.432 ************************************ 00:32:06.432 START TEST kernel_target_abort 00:32:06.432 
************************************ 00:32:06.432 21:02:10 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:32:06.432 21:02:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:32:06.432 21:02:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:32:06.432 21:02:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:32:06.432 21:02:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:32:06.432 21:02:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:06.432 21:02:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:06.432 21:02:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:32:06.432 21:02:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:06.432 21:02:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:32:06.432 21:02:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:32:06.432 21:02:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:32:06.432 21:02:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:06.432 21:02:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:06.432 21:02:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:32:06.432 21:02:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:06.432 21:02:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:06.432 21:02:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:06.432 21:02:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:32:06.432 21:02:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:06.432 21:02:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:32:06.432 21:02:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:06.432 21:02:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:07.811 Waiting for block devices as requested 00:32:07.811 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:07.811 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:07.811 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:07.811 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:08.070 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:08.070 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:08.070 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:08.070 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:08.329 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:32:08.329 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:08.329 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:08.586 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:08.587 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:08.587 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:08.587 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:08.845 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:08.845 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:08.845 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:32:08.845 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:08.845 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:32:08.846 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:32:08.846 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:08.846 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:32:08.846 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:32:08.846 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:32:08.846 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:09.104 No valid GPT data, bailing 00:32:09.104 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:09.104 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:32:09.104 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:32:09.104 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:32:09.104 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:32:09.104 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:09.104 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:09.104 21:02:12 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:09.104 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:09.104 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:32:09.104 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:32:09.104 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:32:09.104 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:32:09.104 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:32:09.104 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:32:09.104 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:32:09.104 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:09.104 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:32:09.104 00:32:09.104 Discovery Log Number of Records 2, Generation counter 2 00:32:09.104 =====Discovery Log Entry 0====== 00:32:09.104 trtype: tcp 00:32:09.104 adrfam: ipv4 00:32:09.104 subtype: current discovery subsystem 00:32:09.104 treq: not specified, sq flow control disable supported 00:32:09.104 portid: 1 00:32:09.105 trsvcid: 4420 00:32:09.105 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:09.105 traddr: 10.0.0.1 00:32:09.105 eflags: none 00:32:09.105 sectype: none 00:32:09.105 =====Discovery Log Entry 1====== 00:32:09.105 trtype: tcp 00:32:09.105 adrfam: ipv4 00:32:09.105 subtype: nvme subsystem 00:32:09.105 treq: not specified, sq flow control disable supported 00:32:09.105 portid: 1 00:32:09.105 trsvcid: 4420 00:32:09.105 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:09.105 traddr: 10.0.0.1 00:32:09.105 eflags: none 00:32:09.105 sectype: none 00:32:09.105 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:32:09.105 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:09.105 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:09.105 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:09.105 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:09.105 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:09.105 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:09.105 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:09.105 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:09.105 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:09.105 21:02:12 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:09.105 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:09.105 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:09.105 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:09.105 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:09.105 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:09.105 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:09.105 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:09.105 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:09.105 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:09.105 21:02:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:12.386 Initializing NVMe Controllers 00:32:12.386 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:12.386 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:12.386 Initialization complete. Launching workers. 00:32:12.386 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 48037, failed: 0 00:32:12.386 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 48037, failed to submit 0 00:32:12.386 success 0, unsuccessful 48037, failed 0 00:32:12.386 21:02:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:12.386 21:02:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:15.724 Initializing NVMe Controllers 00:32:15.724 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:15.725 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:15.725 Initialization complete. Launching workers. 
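The rabort helper above just assembles that transport ID string once and sweeps the abort example over the queue depths listed in qds; stripped of the xtrace noise it is roughly:

    for qd in 4 24 64; do
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
            -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done

Here -q is the queue depth under test, -w rw with -M 50 a mixed read/write workload, and -o 4096 the I/O size; each run's summary reports how many I/Os completed, how many abort commands could be submitted against them, and how many of those aborts actually took effect.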
00:32:15.725 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 95380, failed: 0 00:32:15.725 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21478, failed to submit 73902 00:32:15.725 success 0, unsuccessful 21478, failed 0 00:32:15.725 21:02:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:15.725 21:02:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:19.000 Initializing NVMe Controllers 00:32:19.000 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:19.000 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:19.000 Initialization complete. Launching workers. 00:32:19.000 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 86577, failed: 0 00:32:19.000 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21626, failed to submit 64951 00:32:19.000 success 0, unsuccessful 21626, failed 0 00:32:19.000 21:02:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:19.000 21:02:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:19.000 21:02:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:32:19.000 21:02:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:19.000 21:02:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:19.000 21:02:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:19.000 21:02:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:19.000 21:02:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:32:19.000 21:02:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:32:19.000 21:02:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:19.565 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:19.565 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:19.565 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:19.566 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:19.566 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:19.566 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:19.566 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:19.566 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:19.566 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:19.566 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:19.566 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:19.566 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:19.566 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:19.826 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:19.826 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:32:19.826 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:20.762 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:32:20.762 00:32:20.762 real 0m14.298s 00:32:20.762 user 0m6.033s 00:32:20.762 sys 0m3.453s 00:32:20.762 21:02:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:20.762 21:02:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:20.762 ************************************ 00:32:20.762 END TEST kernel_target_abort 00:32:20.762 ************************************ 00:32:20.762 21:02:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:20.762 21:02:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:20.762 21:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:20.762 21:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:32:20.762 21:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:20.762 21:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:32:20.762 21:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:20.762 21:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:20.762 rmmod nvme_tcp 00:32:20.762 rmmod nvme_fabrics 00:32:20.762 rmmod nvme_keyring 00:32:20.762 21:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:20.762 21:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:32:20.762 21:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:32:20.762 21:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1844561 ']' 00:32:20.762 21:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1844561 00:32:20.762 21:02:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 1844561 ']' 00:32:20.762 21:02:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 1844561 00:32:20.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1844561) - No such process 00:32:20.762 21:02:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 1844561 is not found' 00:32:20.762 Process with pid 1844561 is not found 00:32:20.762 21:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:32:20.762 21:02:24 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:22.137 Waiting for block devices as requested 00:32:22.137 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:22.137 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:22.137 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:22.397 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:22.397 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:22.397 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:22.397 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:22.656 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:22.656 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:32:22.914 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:22.914 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:22.914 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:22.914 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:23.172 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:23.172 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:23.172 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:23.172 0000:80:04.0 
(8086 0e20): vfio-pci -> ioatdma 00:32:23.430 21:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:23.430 21:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:23.430 21:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:32:23.430 21:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:32:23.430 21:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:23.430 21:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:32:23.430 21:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:23.430 21:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:23.430 21:02:26 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:23.430 21:02:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:23.430 21:02:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:25.332 21:02:28 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:25.332 00:32:25.332 real 0m38.254s 00:32:25.332 user 1m2.121s 00:32:25.332 sys 0m9.637s 00:32:25.332 21:02:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:25.332 21:02:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:25.332 ************************************ 00:32:25.332 END TEST nvmf_abort_qd_sizes 00:32:25.332 ************************************ 00:32:25.332 21:02:29 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:25.332 21:02:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:25.332 21:02:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:25.332 21:02:29 -- common/autotest_common.sh@10 -- # set +x 00:32:25.591 ************************************ 00:32:25.591 START TEST keyring_file 00:32:25.591 ************************************ 00:32:25.591 21:02:29 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:25.591 * Looking for test storage... 
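For reference, the nvmftestfini network cleanup traced above amounts to restoring iptables without the SPDK-tagged rules, dropping any SPDK-created network namespaces (remove_spdk_ns), and flushing the test interface; a sketch (the interface name cvl_0_1 is specific to this CI host):

    iptables-save | grep -v SPDK_NVMF | iptables-restore   # reload the ruleset minus entries carrying the SPDK_NVMF tag
    ip -4 addr flush cvl_0_1                               # clear the IPv4 addresses configured for the test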
00:32:25.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:25.591 21:02:29 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:25.591 21:02:29 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:32:25.591 21:02:29 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:25.591 21:02:29 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:25.591 21:02:29 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:25.591 21:02:29 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:25.591 21:02:29 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:25.591 21:02:29 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:32:25.591 21:02:29 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:32:25.591 21:02:29 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:32:25.591 21:02:29 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:32:25.591 21:02:29 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:32:25.591 21:02:29 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:32:25.591 21:02:29 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:32:25.591 21:02:29 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:25.591 21:02:29 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:32:25.591 21:02:29 keyring_file -- scripts/common.sh@345 -- # : 1 00:32:25.591 21:02:29 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:25.591 21:02:29 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:25.591 21:02:29 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:32:25.591 21:02:29 keyring_file -- scripts/common.sh@353 -- # local d=1 00:32:25.591 21:02:29 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:25.591 21:02:29 keyring_file -- scripts/common.sh@355 -- # echo 1 00:32:25.591 21:02:29 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:32:25.591 21:02:29 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:32:25.591 21:02:29 keyring_file -- scripts/common.sh@353 -- # local d=2 00:32:25.591 21:02:29 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:25.591 21:02:29 keyring_file -- scripts/common.sh@355 -- # echo 2 00:32:25.591 21:02:29 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:32:25.591 21:02:29 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:25.591 21:02:29 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:25.591 21:02:29 keyring_file -- scripts/common.sh@368 -- # return 0 00:32:25.591 21:02:29 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:25.591 21:02:29 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:25.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.591 --rc genhtml_branch_coverage=1 00:32:25.591 --rc genhtml_function_coverage=1 00:32:25.591 --rc genhtml_legend=1 00:32:25.591 --rc geninfo_all_blocks=1 00:32:25.591 --rc geninfo_unexecuted_blocks=1 00:32:25.591 00:32:25.591 ' 00:32:25.591 21:02:29 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:25.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.591 --rc genhtml_branch_coverage=1 00:32:25.591 --rc genhtml_function_coverage=1 00:32:25.591 --rc genhtml_legend=1 00:32:25.591 --rc geninfo_all_blocks=1 
00:32:25.591 --rc geninfo_unexecuted_blocks=1 00:32:25.591 00:32:25.591 ' 00:32:25.591 21:02:29 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:25.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.591 --rc genhtml_branch_coverage=1 00:32:25.591 --rc genhtml_function_coverage=1 00:32:25.591 --rc genhtml_legend=1 00:32:25.591 --rc geninfo_all_blocks=1 00:32:25.591 --rc geninfo_unexecuted_blocks=1 00:32:25.591 00:32:25.591 ' 00:32:25.591 21:02:29 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:25.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.591 --rc genhtml_branch_coverage=1 00:32:25.591 --rc genhtml_function_coverage=1 00:32:25.591 --rc genhtml_legend=1 00:32:25.591 --rc geninfo_all_blocks=1 00:32:25.591 --rc geninfo_unexecuted_blocks=1 00:32:25.591 00:32:25.591 ' 00:32:25.591 21:02:29 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:25.591 21:02:29 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:25.591 21:02:29 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:25.591 21:02:29 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:25.591 21:02:29 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:25.591 21:02:29 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:25.591 21:02:29 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:25.591 21:02:29 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:25.591 21:02:29 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:25.591 21:02:29 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:25.591 21:02:29 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:25.591 21:02:29 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:25.591 21:02:29 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:25.591 21:02:29 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:25.591 21:02:29 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:25.591 21:02:29 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:25.591 21:02:29 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:25.591 21:02:29 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:25.591 21:02:29 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:25.591 21:02:29 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:25.591 21:02:29 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:32:25.591 21:02:29 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:25.592 21:02:29 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:25.592 21:02:29 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:25.592 21:02:29 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.592 21:02:29 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.592 21:02:29 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.592 21:02:29 keyring_file -- paths/export.sh@5 -- # export PATH 00:32:25.592 21:02:29 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.592 21:02:29 keyring_file -- nvmf/common.sh@51 -- # : 0 00:32:25.592 21:02:29 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:25.592 21:02:29 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:25.592 21:02:29 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:25.592 21:02:29 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:25.592 21:02:29 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:25.592 21:02:29 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:25.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:25.592 21:02:29 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:25.592 21:02:29 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:25.592 21:02:29 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:25.592 21:02:29 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:25.592 21:02:29 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:25.592 21:02:29 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:25.592 21:02:29 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:25.592 21:02:29 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:25.592 21:02:29 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:25.592 21:02:29 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:25.592 21:02:29 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
00:32:25.592 21:02:29 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:25.592 21:02:29 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:25.592 21:02:29 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:25.592 21:02:29 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:25.592 21:02:29 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.DwAdAbSxBL 00:32:25.592 21:02:29 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:25.592 21:02:29 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:25.592 21:02:29 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:32:25.592 21:02:29 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:25.592 21:02:29 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:32:25.592 21:02:29 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:32:25.592 21:02:29 keyring_file -- nvmf/common.sh@733 -- # python - 00:32:25.592 21:02:29 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.DwAdAbSxBL 00:32:25.592 21:02:29 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.DwAdAbSxBL 00:32:25.592 21:02:29 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.DwAdAbSxBL 00:32:25.592 21:02:29 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:25.592 21:02:29 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:25.592 21:02:29 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:25.592 21:02:29 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:25.592 21:02:29 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:25.592 21:02:29 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:25.592 21:02:29 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.5MMqd5ki6S 00:32:25.592 21:02:29 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:25.592 21:02:29 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:25.592 21:02:29 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:32:25.592 21:02:29 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:25.592 21:02:29 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:32:25.592 21:02:29 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:32:25.592 21:02:29 keyring_file -- nvmf/common.sh@733 -- # python - 00:32:25.850 21:02:29 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5MMqd5ki6S 00:32:25.850 21:02:29 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.5MMqd5ki6S 00:32:25.850 21:02:29 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.5MMqd5ki6S 00:32:25.850 21:02:29 keyring_file -- keyring/file.sh@30 -- # tgtpid=1850334 00:32:25.850 21:02:29 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:25.850 21:02:29 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1850334 00:32:25.850 21:02:29 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1850334 ']' 00:32:25.850 21:02:29 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:25.850 21:02:29 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:25.850 21:02:29 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
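prep_key, traced above for key0 and repeated for key1, just materializes a key file for the bperf run: create a temp file, write the key in the NVMe TLS PSK interchange form (the NVMeTLSkey-1 string produced by the embedded python helper), and lock down the permissions. A rough outline, reusing the helper name from the trace:

    key0=00112233445566778899aabbccddeeff
    key0path=$(mktemp)                                # e.g. /tmp/tmp.DwAdAbSxBL
    format_interchange_psk "$key0" 0 > "$key0path"    # digest 0; emits an NVMeTLSkey-1:... interchange string
    chmod 0600 "$key0path"                            # the keyring rejects group/other-accessible key files

The 0600 bit matters: later in the run the file is deliberately chmod'd to 0660 and keyring_file_add_key fails with "Invalid permissions for key file" before the mode is restored and the add retried.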
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:25.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:25.850 21:02:29 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:25.850 21:02:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:25.850 [2024-11-26 21:02:29.339383] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:32:25.850 [2024-11-26 21:02:29.339476] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1850334 ] 00:32:25.850 [2024-11-26 21:02:29.406366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:25.850 [2024-11-26 21:02:29.466466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:26.107 21:02:29 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:26.107 21:02:29 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:32:26.107 21:02:29 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:26.107 21:02:29 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.107 21:02:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:26.107 [2024-11-26 21:02:29.741983] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:26.107 null0 00:32:26.107 [2024-11-26 21:02:29.774034] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:26.107 [2024-11-26 21:02:29.774529] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:26.107 21:02:29 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.107 21:02:29 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:26.107 21:02:29 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:32:26.107 21:02:29 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:26.107 21:02:29 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:26.107 21:02:29 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:26.107 21:02:29 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:26.107 21:02:29 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:26.107 21:02:29 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:26.107 21:02:29 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.107 21:02:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:26.107 [2024-11-26 21:02:29.798084] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:26.107 request: 00:32:26.107 { 00:32:26.107 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:26.107 "secure_channel": false, 00:32:26.365 "listen_address": { 00:32:26.365 "trtype": "tcp", 00:32:26.365 "traddr": "127.0.0.1", 00:32:26.365 "trsvcid": "4420" 00:32:26.365 }, 00:32:26.365 "method": "nvmf_subsystem_add_listener", 00:32:26.365 "req_id": 1 00:32:26.365 } 00:32:26.365 Got JSON-RPC error response 00:32:26.366 response: 00:32:26.366 { 00:32:26.366 
"code": -32602, 00:32:26.366 "message": "Invalid parameters" 00:32:26.366 } 00:32:26.366 21:02:29 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:26.366 21:02:29 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:32:26.366 21:02:29 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:26.366 21:02:29 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:26.366 21:02:29 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:26.366 21:02:29 keyring_file -- keyring/file.sh@47 -- # bperfpid=1850345 00:32:26.366 21:02:29 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1850345 /var/tmp/bperf.sock 00:32:26.366 21:02:29 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1850345 ']' 00:32:26.366 21:02:29 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:26.366 21:02:29 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:26.366 21:02:29 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:26.366 21:02:29 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:26.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:26.366 21:02:29 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:26.366 21:02:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:26.366 [2024-11-26 21:02:29.850749] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:32:26.366 [2024-11-26 21:02:29.850810] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1850345 ] 00:32:26.366 [2024-11-26 21:02:29.917761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:26.366 [2024-11-26 21:02:29.977986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:26.623 21:02:30 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:26.623 21:02:30 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:32:26.623 21:02:30 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DwAdAbSxBL 00:32:26.623 21:02:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DwAdAbSxBL 00:32:26.881 21:02:30 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.5MMqd5ki6S 00:32:26.881 21:02:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.5MMqd5ki6S 00:32:27.140 21:02:30 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:32:27.140 21:02:30 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:27.140 21:02:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:27.140 21:02:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:27.140 21:02:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:32:27.398 21:02:30 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.DwAdAbSxBL == \/\t\m\p\/\t\m\p\.\D\w\A\d\A\b\S\x\B\L ]] 00:32:27.398 21:02:30 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:32:27.398 21:02:30 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:32:27.398 21:02:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:27.398 21:02:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:27.398 21:02:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:27.656 21:02:31 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.5MMqd5ki6S == \/\t\m\p\/\t\m\p\.\5\M\M\q\d\5\k\i\6\S ]] 00:32:27.656 21:02:31 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:32:27.656 21:02:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:27.656 21:02:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:27.656 21:02:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:27.656 21:02:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:27.656 21:02:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:27.914 21:02:31 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:27.914 21:02:31 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:32:27.914 21:02:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:27.914 21:02:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:27.914 21:02:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:27.914 21:02:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:27.914 21:02:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:28.172 21:02:31 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:32:28.172 21:02:31 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:28.172 21:02:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:28.430 [2024-11-26 21:02:31.954103] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:28.430 nvme0n1 00:32:28.430 21:02:32 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:32:28.430 21:02:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:28.430 21:02:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:28.430 21:02:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:28.430 21:02:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:28.430 21:02:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:28.688 21:02:32 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:32:28.688 21:02:32 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:32:28.688 21:02:32 keyring_file 
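Everything in this test is driven over bdevperf's RPC socket rather than the target's /var/tmp/spdk.sock; bperf_cmd is just rpc.py pointed at /var/tmp/bperf.sock. Stripped down, the happy-path sequence exercised here looks roughly like this (paths as in this workspace; the jq filter condenses the separate get_key/refcnt helpers from the trace):

    bperf_rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
    bperf_rpc keyring_file_add_key key0 "$key0path"
    bperf_rpc keyring_file_add_key key1 "$key1path"
    bperf_rpc keyring_get_keys | jq '.[] | select(.name == "key0")'       # shows the key's path and refcnt
    bperf_rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests                              # 1 second of randrw against nvme0n1
    bperf_rpc bdev_nvme_detach_controller nvme0

A successful attach bumps key0's refcount from 1 to 2 (the (( 2 == 2 )) check below), while the NOT-wrapped counterpart further down attaches with --psk key1 and is expected to fail, surfacing as the bdev_nvme_attach_controller "Input/output error" response.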
-- keyring/common.sh@12 -- # get_key key1 00:32:28.688 21:02:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:28.689 21:02:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:28.689 21:02:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:28.689 21:02:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:28.948 21:02:32 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:32:28.948 21:02:32 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:29.206 Running I/O for 1 seconds... 00:32:30.140 10395.00 IOPS, 40.61 MiB/s 00:32:30.140 Latency(us) 00:32:30.140 [2024-11-26T20:02:33.837Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:30.140 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:30.140 nvme0n1 : 1.01 10443.28 40.79 0.00 0.00 12217.09 6068.15 25437.68 00:32:30.140 [2024-11-26T20:02:33.837Z] =================================================================================================================== 00:32:30.140 [2024-11-26T20:02:33.837Z] Total : 10443.28 40.79 0.00 0.00 12217.09 6068.15 25437.68 00:32:30.140 { 00:32:30.140 "results": [ 00:32:30.140 { 00:32:30.140 "job": "nvme0n1", 00:32:30.140 "core_mask": "0x2", 00:32:30.140 "workload": "randrw", 00:32:30.140 "percentage": 50, 00:32:30.140 "status": "finished", 00:32:30.140 "queue_depth": 128, 00:32:30.140 "io_size": 4096, 00:32:30.140 "runtime": 1.007729, 00:32:30.140 "iops": 10443.283859053376, 00:32:30.140 "mibps": 40.79407757442725, 00:32:30.140 "io_failed": 0, 00:32:30.140 "io_timeout": 0, 00:32:30.140 "avg_latency_us": 12217.093077691907, 00:32:30.140 "min_latency_us": 6068.148148148148, 00:32:30.140 "max_latency_us": 25437.677037037036 00:32:30.140 } 00:32:30.140 ], 00:32:30.140 "core_count": 1 00:32:30.140 } 00:32:30.140 21:02:33 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:30.140 21:02:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:30.399 21:02:34 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:32:30.399 21:02:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:30.399 21:02:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:30.399 21:02:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:30.399 21:02:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:30.399 21:02:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:30.657 21:02:34 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:30.657 21:02:34 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:32:30.657 21:02:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:30.657 21:02:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:30.657 21:02:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:30.658 21:02:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:30.658 21:02:34 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:30.916 21:02:34 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:32:30.916 21:02:34 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:30.916 21:02:34 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:32:30.916 21:02:34 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:30.916 21:02:34 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:32:30.916 21:02:34 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:30.916 21:02:34 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:32:30.916 21:02:34 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:30.916 21:02:34 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:30.916 21:02:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:31.175 [2024-11-26 21:02:34.812461] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:31.175 [2024-11-26 21:02:34.813086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16fa530 (107): Transport endpoint is not connected 00:32:31.175 [2024-11-26 21:02:34.814077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16fa530 (9): Bad file descriptor 00:32:31.175 [2024-11-26 21:02:34.815076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:32:31.175 [2024-11-26 21:02:34.815094] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:31.175 [2024-11-26 21:02:34.815122] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:32:31.175 [2024-11-26 21:02:34.815135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:32:31.175 request: 00:32:31.175 { 00:32:31.175 "name": "nvme0", 00:32:31.175 "trtype": "tcp", 00:32:31.175 "traddr": "127.0.0.1", 00:32:31.175 "adrfam": "ipv4", 00:32:31.175 "trsvcid": "4420", 00:32:31.175 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:31.175 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:31.175 "prchk_reftag": false, 00:32:31.175 "prchk_guard": false, 00:32:31.175 "hdgst": false, 00:32:31.175 "ddgst": false, 00:32:31.175 "psk": "key1", 00:32:31.175 "allow_unrecognized_csi": false, 00:32:31.175 "method": "bdev_nvme_attach_controller", 00:32:31.175 "req_id": 1 00:32:31.175 } 00:32:31.175 Got JSON-RPC error response 00:32:31.175 response: 00:32:31.175 { 00:32:31.175 "code": -5, 00:32:31.175 "message": "Input/output error" 00:32:31.175 } 00:32:31.175 21:02:34 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:32:31.175 21:02:34 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:31.175 21:02:34 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:31.175 21:02:34 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:31.175 21:02:34 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:32:31.175 21:02:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:31.175 21:02:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:31.175 21:02:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:31.175 21:02:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:31.175 21:02:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:31.433 21:02:35 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:31.433 21:02:35 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:32:31.433 21:02:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:31.433 21:02:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:31.433 21:02:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:31.433 21:02:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:31.433 21:02:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:31.691 21:02:35 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:32:31.691 21:02:35 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:32:31.691 21:02:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:32.257 21:02:35 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:32:32.257 21:02:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:32.257 21:02:35 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:32:32.257 21:02:35 keyring_file -- keyring/file.sh@78 -- # jq length 00:32:32.257 21:02:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:32.515 21:02:36 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:32:32.515 21:02:36 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.DwAdAbSxBL 00:32:32.515 21:02:36 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.DwAdAbSxBL 00:32:32.515 21:02:36 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:32:32.515 21:02:36 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.DwAdAbSxBL 00:32:32.515 21:02:36 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:32:32.515 21:02:36 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:32.515 21:02:36 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:32:32.515 21:02:36 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:32.515 21:02:36 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DwAdAbSxBL 00:32:32.515 21:02:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DwAdAbSxBL 00:32:32.772 [2024-11-26 21:02:36.437073] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.DwAdAbSxBL': 0100660 00:32:32.772 [2024-11-26 21:02:36.437107] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:32.772 request: 00:32:32.772 { 00:32:32.772 "name": "key0", 00:32:32.772 "path": "/tmp/tmp.DwAdAbSxBL", 00:32:32.772 "method": "keyring_file_add_key", 00:32:32.772 "req_id": 1 00:32:32.772 } 00:32:32.772 Got JSON-RPC error response 00:32:32.772 response: 00:32:32.772 { 00:32:32.772 "code": -1, 00:32:32.772 "message": "Operation not permitted" 00:32:32.772 } 00:32:32.772 21:02:36 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:32:32.772 21:02:36 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:32.772 21:02:36 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:32.772 21:02:36 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:32.772 21:02:36 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.DwAdAbSxBL 00:32:32.772 21:02:36 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.DwAdAbSxBL 00:32:32.772 21:02:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.DwAdAbSxBL 00:32:33.339 21:02:36 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.DwAdAbSxBL 00:32:33.339 21:02:36 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:32:33.339 21:02:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:33.339 21:02:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:33.339 21:02:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:33.339 21:02:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:33.339 21:02:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:33.339 21:02:37 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:32:33.339 21:02:37 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:33.339 21:02:37 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:32:33.339 21:02:37 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:33.339 21:02:37 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:32:33.339 21:02:37 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:33.339 21:02:37 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:32:33.339 21:02:37 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:33.339 21:02:37 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:33.339 21:02:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:33.597 [2024-11-26 21:02:37.267388] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.DwAdAbSxBL': No such file or directory 00:32:33.597 [2024-11-26 21:02:37.267432] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:33.597 [2024-11-26 21:02:37.267456] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:33.597 [2024-11-26 21:02:37.267469] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:32:33.597 [2024-11-26 21:02:37.267482] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:33.597 [2024-11-26 21:02:37.267493] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:33.597 request: 00:32:33.597 { 00:32:33.597 "name": "nvme0", 00:32:33.597 "trtype": "tcp", 00:32:33.597 "traddr": "127.0.0.1", 00:32:33.597 "adrfam": "ipv4", 00:32:33.597 "trsvcid": "4420", 00:32:33.597 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:33.597 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:33.597 "prchk_reftag": false, 00:32:33.597 "prchk_guard": false, 00:32:33.597 "hdgst": false, 00:32:33.597 "ddgst": false, 00:32:33.597 "psk": "key0", 00:32:33.597 "allow_unrecognized_csi": false, 00:32:33.597 "method": "bdev_nvme_attach_controller", 00:32:33.597 "req_id": 1 00:32:33.597 } 00:32:33.597 Got JSON-RPC error response 00:32:33.597 response: 00:32:33.597 { 00:32:33.597 "code": -19, 00:32:33.597 "message": "No such device" 00:32:33.597 } 00:32:33.597 21:02:37 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:32:33.597 21:02:37 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:33.597 21:02:37 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:33.597 21:02:37 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:33.597 21:02:37 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:32:33.597 21:02:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:34.164 21:02:37 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:34.164 21:02:37 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:32:34.164 21:02:37 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:34.164 21:02:37 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:34.164 21:02:37 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:34.164 21:02:37 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:34.164 21:02:37 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.mA52TRyn9G 00:32:34.164 21:02:37 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:34.164 21:02:37 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:34.164 21:02:37 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:32:34.164 21:02:37 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:34.164 21:02:37 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:32:34.164 21:02:37 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:32:34.164 21:02:37 keyring_file -- nvmf/common.sh@733 -- # python - 00:32:34.164 21:02:37 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.mA52TRyn9G 00:32:34.164 21:02:37 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.mA52TRyn9G 00:32:34.164 21:02:37 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.mA52TRyn9G 00:32:34.164 21:02:37 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.mA52TRyn9G 00:32:34.164 21:02:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.mA52TRyn9G 00:32:34.422 21:02:37 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:34.422 21:02:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:34.680 nvme0n1 00:32:34.680 21:02:38 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:32:34.680 21:02:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:34.680 21:02:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:34.680 21:02:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:34.680 21:02:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:34.680 21:02:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:34.938 21:02:38 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:32:34.938 21:02:38 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:32:34.938 21:02:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:35.196 21:02:38 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:32:35.196 21:02:38 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:32:35.196 21:02:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:35.196 21:02:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:35.196 21:02:38 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:35.455 21:02:39 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:32:35.455 21:02:39 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:32:35.455 21:02:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:35.455 21:02:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:35.455 21:02:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:35.455 21:02:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:35.455 21:02:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:35.713 21:02:39 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:32:35.714 21:02:39 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:35.714 21:02:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:35.972 21:02:39 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:32:35.972 21:02:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:35.972 21:02:39 keyring_file -- keyring/file.sh@105 -- # jq length 00:32:36.230 21:02:39 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:32:36.230 21:02:39 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.mA52TRyn9G 00:32:36.230 21:02:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.mA52TRyn9G 00:32:36.488 21:02:40 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.5MMqd5ki6S 00:32:36.488 21:02:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.5MMqd5ki6S 00:32:36.746 21:02:40 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:36.746 21:02:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:37.312 nvme0n1 00:32:37.312 21:02:40 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:32:37.312 21:02:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:37.572 21:02:41 keyring_file -- keyring/file.sh@113 -- # config='{ 00:32:37.572 "subsystems": [ 00:32:37.572 { 00:32:37.572 "subsystem": "keyring", 00:32:37.572 "config": [ 00:32:37.572 { 00:32:37.572 "method": "keyring_file_add_key", 00:32:37.572 "params": { 00:32:37.572 "name": "key0", 00:32:37.572 "path": "/tmp/tmp.mA52TRyn9G" 00:32:37.572 } 00:32:37.572 }, 00:32:37.572 { 00:32:37.572 "method": "keyring_file_add_key", 00:32:37.572 "params": { 00:32:37.572 "name": "key1", 00:32:37.572 "path": "/tmp/tmp.5MMqd5ki6S" 00:32:37.572 } 00:32:37.572 } 00:32:37.572 ] 00:32:37.572 
}, 00:32:37.572 { 00:32:37.572 "subsystem": "iobuf", 00:32:37.572 "config": [ 00:32:37.572 { 00:32:37.572 "method": "iobuf_set_options", 00:32:37.572 "params": { 00:32:37.572 "small_pool_count": 8192, 00:32:37.572 "large_pool_count": 1024, 00:32:37.572 "small_bufsize": 8192, 00:32:37.572 "large_bufsize": 135168, 00:32:37.572 "enable_numa": false 00:32:37.572 } 00:32:37.572 } 00:32:37.572 ] 00:32:37.572 }, 00:32:37.572 { 00:32:37.572 "subsystem": "sock", 00:32:37.572 "config": [ 00:32:37.572 { 00:32:37.572 "method": "sock_set_default_impl", 00:32:37.572 "params": { 00:32:37.572 "impl_name": "posix" 00:32:37.572 } 00:32:37.572 }, 00:32:37.572 { 00:32:37.572 "method": "sock_impl_set_options", 00:32:37.572 "params": { 00:32:37.572 "impl_name": "ssl", 00:32:37.572 "recv_buf_size": 4096, 00:32:37.572 "send_buf_size": 4096, 00:32:37.572 "enable_recv_pipe": true, 00:32:37.572 "enable_quickack": false, 00:32:37.572 "enable_placement_id": 0, 00:32:37.572 "enable_zerocopy_send_server": true, 00:32:37.572 "enable_zerocopy_send_client": false, 00:32:37.572 "zerocopy_threshold": 0, 00:32:37.572 "tls_version": 0, 00:32:37.572 "enable_ktls": false 00:32:37.572 } 00:32:37.572 }, 00:32:37.572 { 00:32:37.572 "method": "sock_impl_set_options", 00:32:37.572 "params": { 00:32:37.572 "impl_name": "posix", 00:32:37.572 "recv_buf_size": 2097152, 00:32:37.572 "send_buf_size": 2097152, 00:32:37.572 "enable_recv_pipe": true, 00:32:37.572 "enable_quickack": false, 00:32:37.572 "enable_placement_id": 0, 00:32:37.572 "enable_zerocopy_send_server": true, 00:32:37.572 "enable_zerocopy_send_client": false, 00:32:37.572 "zerocopy_threshold": 0, 00:32:37.572 "tls_version": 0, 00:32:37.572 "enable_ktls": false 00:32:37.572 } 00:32:37.572 } 00:32:37.572 ] 00:32:37.572 }, 00:32:37.572 { 00:32:37.572 "subsystem": "vmd", 00:32:37.572 "config": [] 00:32:37.572 }, 00:32:37.572 { 00:32:37.573 "subsystem": "accel", 00:32:37.573 "config": [ 00:32:37.573 { 00:32:37.573 "method": "accel_set_options", 00:32:37.573 "params": { 00:32:37.573 "small_cache_size": 128, 00:32:37.573 "large_cache_size": 16, 00:32:37.573 "task_count": 2048, 00:32:37.573 "sequence_count": 2048, 00:32:37.573 "buf_count": 2048 00:32:37.573 } 00:32:37.573 } 00:32:37.573 ] 00:32:37.573 }, 00:32:37.573 { 00:32:37.573 "subsystem": "bdev", 00:32:37.573 "config": [ 00:32:37.573 { 00:32:37.573 "method": "bdev_set_options", 00:32:37.573 "params": { 00:32:37.573 "bdev_io_pool_size": 65535, 00:32:37.573 "bdev_io_cache_size": 256, 00:32:37.573 "bdev_auto_examine": true, 00:32:37.573 "iobuf_small_cache_size": 128, 00:32:37.573 "iobuf_large_cache_size": 16 00:32:37.573 } 00:32:37.573 }, 00:32:37.573 { 00:32:37.573 "method": "bdev_raid_set_options", 00:32:37.573 "params": { 00:32:37.573 "process_window_size_kb": 1024, 00:32:37.573 "process_max_bandwidth_mb_sec": 0 00:32:37.573 } 00:32:37.573 }, 00:32:37.573 { 00:32:37.573 "method": "bdev_iscsi_set_options", 00:32:37.573 "params": { 00:32:37.573 "timeout_sec": 30 00:32:37.573 } 00:32:37.573 }, 00:32:37.573 { 00:32:37.573 "method": "bdev_nvme_set_options", 00:32:37.573 "params": { 00:32:37.573 "action_on_timeout": "none", 00:32:37.573 "timeout_us": 0, 00:32:37.573 "timeout_admin_us": 0, 00:32:37.573 "keep_alive_timeout_ms": 10000, 00:32:37.573 "arbitration_burst": 0, 00:32:37.573 "low_priority_weight": 0, 00:32:37.573 "medium_priority_weight": 0, 00:32:37.573 "high_priority_weight": 0, 00:32:37.573 "nvme_adminq_poll_period_us": 10000, 00:32:37.573 "nvme_ioq_poll_period_us": 0, 00:32:37.573 "io_queue_requests": 512, 00:32:37.573 
"delay_cmd_submit": true, 00:32:37.573 "transport_retry_count": 4, 00:32:37.573 "bdev_retry_count": 3, 00:32:37.573 "transport_ack_timeout": 0, 00:32:37.573 "ctrlr_loss_timeout_sec": 0, 00:32:37.573 "reconnect_delay_sec": 0, 00:32:37.573 "fast_io_fail_timeout_sec": 0, 00:32:37.573 "disable_auto_failback": false, 00:32:37.573 "generate_uuids": false, 00:32:37.573 "transport_tos": 0, 00:32:37.573 "nvme_error_stat": false, 00:32:37.573 "rdma_srq_size": 0, 00:32:37.573 "io_path_stat": false, 00:32:37.573 "allow_accel_sequence": false, 00:32:37.573 "rdma_max_cq_size": 0, 00:32:37.573 "rdma_cm_event_timeout_ms": 0, 00:32:37.573 "dhchap_digests": [ 00:32:37.573 "sha256", 00:32:37.573 "sha384", 00:32:37.573 "sha512" 00:32:37.573 ], 00:32:37.573 "dhchap_dhgroups": [ 00:32:37.573 "null", 00:32:37.573 "ffdhe2048", 00:32:37.573 "ffdhe3072", 00:32:37.573 "ffdhe4096", 00:32:37.573 "ffdhe6144", 00:32:37.573 "ffdhe8192" 00:32:37.573 ] 00:32:37.573 } 00:32:37.573 }, 00:32:37.573 { 00:32:37.573 "method": "bdev_nvme_attach_controller", 00:32:37.573 "params": { 00:32:37.573 "name": "nvme0", 00:32:37.573 "trtype": "TCP", 00:32:37.573 "adrfam": "IPv4", 00:32:37.573 "traddr": "127.0.0.1", 00:32:37.573 "trsvcid": "4420", 00:32:37.573 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:37.573 "prchk_reftag": false, 00:32:37.573 "prchk_guard": false, 00:32:37.573 "ctrlr_loss_timeout_sec": 0, 00:32:37.573 "reconnect_delay_sec": 0, 00:32:37.573 "fast_io_fail_timeout_sec": 0, 00:32:37.573 "psk": "key0", 00:32:37.573 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:37.573 "hdgst": false, 00:32:37.573 "ddgst": false, 00:32:37.573 "multipath": "multipath" 00:32:37.573 } 00:32:37.573 }, 00:32:37.573 { 00:32:37.573 "method": "bdev_nvme_set_hotplug", 00:32:37.573 "params": { 00:32:37.573 "period_us": 100000, 00:32:37.573 "enable": false 00:32:37.573 } 00:32:37.573 }, 00:32:37.573 { 00:32:37.573 "method": "bdev_wait_for_examine" 00:32:37.573 } 00:32:37.573 ] 00:32:37.573 }, 00:32:37.573 { 00:32:37.573 "subsystem": "nbd", 00:32:37.573 "config": [] 00:32:37.573 } 00:32:37.573 ] 00:32:37.573 }' 00:32:37.573 21:02:41 keyring_file -- keyring/file.sh@115 -- # killprocess 1850345 00:32:37.573 21:02:41 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1850345 ']' 00:32:37.573 21:02:41 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1850345 00:32:37.573 21:02:41 keyring_file -- common/autotest_common.sh@959 -- # uname 00:32:37.573 21:02:41 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:37.573 21:02:41 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1850345 00:32:37.573 21:02:41 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:37.573 21:02:41 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:37.573 21:02:41 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1850345' 00:32:37.573 killing process with pid 1850345 00:32:37.573 21:02:41 keyring_file -- common/autotest_common.sh@973 -- # kill 1850345 00:32:37.573 Received shutdown signal, test time was about 1.000000 seconds 00:32:37.573 00:32:37.573 Latency(us) 00:32:37.573 [2024-11-26T20:02:41.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:37.573 [2024-11-26T20:02:41.270Z] =================================================================================================================== 00:32:37.573 [2024-11-26T20:02:41.270Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:37.573 21:02:41 
keyring_file -- common/autotest_common.sh@978 -- # wait 1850345 00:32:37.831 21:02:41 keyring_file -- keyring/file.sh@118 -- # bperfpid=1851915 00:32:37.831 21:02:41 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1851915 /var/tmp/bperf.sock 00:32:37.831 21:02:41 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1851915 ']' 00:32:37.832 21:02:41 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:37.832 21:02:41 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:37.832 21:02:41 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:37.832 21:02:41 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:32:37.832 "subsystems": [ 00:32:37.832 { 00:32:37.832 "subsystem": "keyring", 00:32:37.832 "config": [ 00:32:37.832 { 00:32:37.832 "method": "keyring_file_add_key", 00:32:37.832 "params": { 00:32:37.832 "name": "key0", 00:32:37.832 "path": "/tmp/tmp.mA52TRyn9G" 00:32:37.832 } 00:32:37.832 }, 00:32:37.832 { 00:32:37.832 "method": "keyring_file_add_key", 00:32:37.832 "params": { 00:32:37.832 "name": "key1", 00:32:37.832 "path": "/tmp/tmp.5MMqd5ki6S" 00:32:37.832 } 00:32:37.832 } 00:32:37.832 ] 00:32:37.832 }, 00:32:37.832 { 00:32:37.832 "subsystem": "iobuf", 00:32:37.832 "config": [ 00:32:37.832 { 00:32:37.832 "method": "iobuf_set_options", 00:32:37.832 "params": { 00:32:37.832 "small_pool_count": 8192, 00:32:37.832 "large_pool_count": 1024, 00:32:37.832 "small_bufsize": 8192, 00:32:37.832 "large_bufsize": 135168, 00:32:37.832 "enable_numa": false 00:32:37.832 } 00:32:37.832 } 00:32:37.832 ] 00:32:37.832 }, 00:32:37.832 { 00:32:37.832 "subsystem": "sock", 00:32:37.832 "config": [ 00:32:37.832 { 00:32:37.832 "method": "sock_set_default_impl", 00:32:37.832 "params": { 00:32:37.832 "impl_name": "posix" 00:32:37.832 } 00:32:37.832 }, 00:32:37.832 { 00:32:37.832 "method": "sock_impl_set_options", 00:32:37.832 "params": { 00:32:37.832 "impl_name": "ssl", 00:32:37.832 "recv_buf_size": 4096, 00:32:37.832 "send_buf_size": 4096, 00:32:37.832 "enable_recv_pipe": true, 00:32:37.832 "enable_quickack": false, 00:32:37.832 "enable_placement_id": 0, 00:32:37.832 "enable_zerocopy_send_server": true, 00:32:37.832 "enable_zerocopy_send_client": false, 00:32:37.832 "zerocopy_threshold": 0, 00:32:37.832 "tls_version": 0, 00:32:37.832 "enable_ktls": false 00:32:37.832 } 00:32:37.832 }, 00:32:37.832 { 00:32:37.832 "method": "sock_impl_set_options", 00:32:37.832 "params": { 00:32:37.832 "impl_name": "posix", 00:32:37.832 "recv_buf_size": 2097152, 00:32:37.832 "send_buf_size": 2097152, 00:32:37.832 "enable_recv_pipe": true, 00:32:37.832 "enable_quickack": false, 00:32:37.832 "enable_placement_id": 0, 00:32:37.832 "enable_zerocopy_send_server": true, 00:32:37.832 "enable_zerocopy_send_client": false, 00:32:37.832 "zerocopy_threshold": 0, 00:32:37.832 "tls_version": 0, 00:32:37.832 "enable_ktls": false 00:32:37.832 } 00:32:37.832 } 00:32:37.832 ] 00:32:37.832 }, 00:32:37.832 { 00:32:37.832 "subsystem": "vmd", 00:32:37.832 "config": [] 00:32:37.832 }, 00:32:37.832 { 00:32:37.832 "subsystem": "accel", 00:32:37.832 "config": [ 00:32:37.832 { 00:32:37.832 "method": "accel_set_options", 00:32:37.832 "params": { 00:32:37.832 "small_cache_size": 128, 00:32:37.832 "large_cache_size": 16, 00:32:37.832 "task_count": 2048, 00:32:37.832 "sequence_count": 2048, 00:32:37.832 "buf_count": 2048 00:32:37.832 } 
00:32:37.832 } 00:32:37.832 ] 00:32:37.832 }, 00:32:37.832 { 00:32:37.832 "subsystem": "bdev", 00:32:37.832 "config": [ 00:32:37.832 { 00:32:37.832 "method": "bdev_set_options", 00:32:37.832 "params": { 00:32:37.832 "bdev_io_pool_size": 65535, 00:32:37.832 "bdev_io_cache_size": 256, 00:32:37.832 "bdev_auto_examine": true, 00:32:37.832 "iobuf_small_cache_size": 128, 00:32:37.832 "iobuf_large_cache_size": 16 00:32:37.832 } 00:32:37.832 }, 00:32:37.832 { 00:32:37.832 "method": "bdev_raid_set_options", 00:32:37.832 "params": { 00:32:37.832 "process_window_size_kb": 1024, 00:32:37.832 "process_max_bandwidth_mb_sec": 0 00:32:37.832 } 00:32:37.832 }, 00:32:37.832 { 00:32:37.832 "method": "bdev_iscsi_set_options", 00:32:37.832 "params": { 00:32:37.832 "timeout_sec": 30 00:32:37.832 } 00:32:37.832 }, 00:32:37.832 { 00:32:37.832 "method": "bdev_nvme_set_options", 00:32:37.832 "params": { 00:32:37.832 "action_on_timeout": "none", 00:32:37.832 "timeout_us": 0, 00:32:37.832 "timeout_admin_us": 0, 00:32:37.832 "keep_alive_timeout_ms": 10000, 00:32:37.832 "arbitration_burst": 0, 00:32:37.832 "low_priority_weight": 0, 00:32:37.832 "medium_priority_weight": 0, 00:32:37.832 "high_priority_weight": 0, 00:32:37.832 "nvme_adminq_poll_period_us": 10000, 00:32:37.832 "nvme_ioq_poll_period_us": 0, 00:32:37.832 "io_queue_requests": 512, 00:32:37.832 "delay_cmd_submit": true, 00:32:37.832 "transport_retry_count": 4, 00:32:37.832 "bdev_retry_count": 3, 00:32:37.832 "transport_ack_timeout": 0, 00:32:37.832 "ctrlr_loss_timeout_sec": 0, 00:32:37.832 "reconnect_delay_sec": 0, 00:32:37.832 "fast_io_fail_timeout_sec": 0, 00:32:37.832 "disable_auto_failback": false, 00:32:37.832 "generate_uuids": false, 00:32:37.832 "transport_tos": 0, 00:32:37.832 "nvme_error_stat": false, 00:32:37.832 "rdma_srq_size": 0, 00:32:37.832 "io_path_stat": false, 00:32:37.832 "allow_accel_sequence": false, 00:32:37.832 "rdma_max_cq_size": 0, 00:32:37.832 "rdma_cm_event_timeout_ms": 0, 00:32:37.832 "dhchap_digests": [ 00:32:37.832 "sha256", 00:32:37.832 "sha384", 00:32:37.832 "sha512" 00:32:37.832 ], 00:32:37.832 "dhchap_dhgroups": [ 00:32:37.832 "null", 00:32:37.832 "ffdhe2048", 00:32:37.832 "ffdhe3072", 00:32:37.832 "ffdhe4096", 00:32:37.832 "ffdhe6144", 00:32:37.832 "ffdhe8192" 00:32:37.832 ] 00:32:37.832 } 00:32:37.832 }, 00:32:37.832 { 00:32:37.832 "method": "bdev_nvme_attach_controller", 00:32:37.832 "params": { 00:32:37.832 "name": "nvme0", 00:32:37.832 "trtype": "TCP", 00:32:37.832 "adrfam": "IPv4", 00:32:37.832 "traddr": "127.0.0.1", 00:32:37.832 "trsvcid": "4420", 00:32:37.832 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:37.832 "prchk_reftag": false, 00:32:37.832 "prchk_guard": false, 00:32:37.832 "ctrlr_loss_timeout_sec": 0, 00:32:37.832 "reconnect_delay_sec": 0, 00:32:37.832 "fast_io_fail_timeout_sec": 0, 00:32:37.832 "psk": "key0", 00:32:37.832 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:37.832 "hdgst": false, 00:32:37.832 "ddgst": false, 00:32:37.832 "multipath": "multipath" 00:32:37.832 } 00:32:37.832 }, 00:32:37.832 { 00:32:37.832 "method": "bdev_nvme_set_hotplug", 00:32:37.832 "params": { 00:32:37.832 "period_us": 100000, 00:32:37.832 "enable": false 00:32:37.832 } 00:32:37.832 }, 00:32:37.832 { 00:32:37.832 "method": "bdev_wait_for_examine" 00:32:37.832 } 00:32:37.832 ] 00:32:37.832 }, 00:32:37.832 { 00:32:37.832 "subsystem": "nbd", 00:32:37.832 "config": [] 00:32:37.832 } 00:32:37.832 ] 00:32:37.832 }' 00:32:37.832 21:02:41 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/bperf.sock...' 00:32:37.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:37.832 21:02:41 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:37.832 21:02:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:37.832 [2024-11-26 21:02:41.379870] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 00:32:37.832 [2024-11-26 21:02:41.379958] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1851915 ] 00:32:37.832 [2024-11-26 21:02:41.447993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:37.832 [2024-11-26 21:02:41.511009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:38.091 [2024-11-26 21:02:41.707538] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:38.350 21:02:41 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:38.350 21:02:41 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:32:38.350 21:02:41 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:32:38.350 21:02:41 keyring_file -- keyring/file.sh@121 -- # jq length 00:32:38.350 21:02:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:38.608 21:02:42 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:38.608 21:02:42 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:32:38.608 21:02:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:38.608 21:02:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:38.608 21:02:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:38.608 21:02:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:38.608 21:02:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:38.866 21:02:42 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:32:38.866 21:02:42 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:32:38.866 21:02:42 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:38.866 21:02:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:38.866 21:02:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:38.866 21:02:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:38.866 21:02:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:39.123 21:02:42 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:32:39.123 21:02:42 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:32:39.123 21:02:42 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:32:39.124 21:02:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:39.382 21:02:42 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:32:39.382 21:02:42 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:39.382 21:02:42 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.mA52TRyn9G /tmp/tmp.5MMqd5ki6S 00:32:39.382 21:02:42 keyring_file -- keyring/file.sh@20 -- # killprocess 1851915 00:32:39.382 21:02:42 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1851915 ']' 00:32:39.382 21:02:42 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1851915 00:32:39.382 21:02:42 keyring_file -- common/autotest_common.sh@959 -- # uname 00:32:39.382 21:02:42 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:39.382 21:02:42 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1851915 00:32:39.382 21:02:42 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:39.382 21:02:42 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:39.382 21:02:42 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1851915' 00:32:39.382 killing process with pid 1851915 00:32:39.382 21:02:42 keyring_file -- common/autotest_common.sh@973 -- # kill 1851915 00:32:39.382 Received shutdown signal, test time was about 1.000000 seconds 00:32:39.382 00:32:39.382 Latency(us) 00:32:39.382 [2024-11-26T20:02:43.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:39.382 [2024-11-26T20:02:43.079Z] =================================================================================================================== 00:32:39.382 [2024-11-26T20:02:43.079Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:39.382 21:02:42 keyring_file -- common/autotest_common.sh@978 -- # wait 1851915 00:32:39.640 21:02:43 keyring_file -- keyring/file.sh@21 -- # killprocess 1850334 00:32:39.640 21:02:43 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1850334 ']' 00:32:39.640 21:02:43 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1850334 00:32:39.640 21:02:43 keyring_file -- common/autotest_common.sh@959 -- # uname 00:32:39.640 21:02:43 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:39.640 21:02:43 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1850334 00:32:39.640 21:02:43 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:39.640 21:02:43 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:39.640 21:02:43 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1850334' 00:32:39.640 killing process with pid 1850334 00:32:39.640 21:02:43 keyring_file -- common/autotest_common.sh@973 -- # kill 1850334 00:32:39.640 21:02:43 keyring_file -- common/autotest_common.sh@978 -- # wait 1850334 00:32:40.207 00:32:40.207 real 0m14.594s 00:32:40.207 user 0m37.107s 00:32:40.207 sys 0m3.232s 00:32:40.207 21:02:43 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:40.207 21:02:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:40.207 ************************************ 00:32:40.207 END TEST keyring_file 00:32:40.207 ************************************ 00:32:40.207 21:02:43 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:32:40.207 21:02:43 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:40.207 21:02:43 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:40.207 21:02:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:40.207 21:02:43 
-- common/autotest_common.sh@10 -- # set +x 00:32:40.207 ************************************ 00:32:40.207 START TEST keyring_linux 00:32:40.207 ************************************ 00:32:40.207 21:02:43 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:40.207 Joined session keyring: 657578213 00:32:40.207 * Looking for test storage... 00:32:40.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:40.207 21:02:43 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:40.207 21:02:43 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:32:40.207 21:02:43 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:40.207 21:02:43 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:40.207 21:02:43 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:40.207 21:02:43 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:40.207 21:02:43 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:40.207 21:02:43 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:32:40.207 21:02:43 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:32:40.207 21:02:43 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:32:40.207 21:02:43 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:32:40.207 21:02:43 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:32:40.207 21:02:43 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:32:40.207 21:02:43 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:32:40.207 21:02:43 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:40.207 21:02:43 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:32:40.207 21:02:43 keyring_linux -- scripts/common.sh@345 -- # : 1 00:32:40.207 21:02:43 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:40.207 21:02:43 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:40.207 21:02:43 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:32:40.207 21:02:43 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:32:40.207 21:02:43 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:40.207 21:02:43 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:32:40.207 21:02:43 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:32:40.207 21:02:43 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:32:40.207 21:02:43 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:32:40.207 21:02:43 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:40.207 21:02:43 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:32:40.207 21:02:43 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:32:40.207 21:02:43 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:40.207 21:02:43 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:40.207 21:02:43 keyring_linux -- scripts/common.sh@368 -- # return 0 00:32:40.207 21:02:43 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:40.207 21:02:43 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:40.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.207 --rc genhtml_branch_coverage=1 00:32:40.207 --rc genhtml_function_coverage=1 00:32:40.207 --rc genhtml_legend=1 00:32:40.207 --rc geninfo_all_blocks=1 00:32:40.207 --rc geninfo_unexecuted_blocks=1 00:32:40.207 00:32:40.207 ' 00:32:40.207 21:02:43 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:40.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.207 --rc genhtml_branch_coverage=1 00:32:40.207 --rc genhtml_function_coverage=1 00:32:40.207 --rc genhtml_legend=1 00:32:40.207 --rc geninfo_all_blocks=1 00:32:40.207 --rc geninfo_unexecuted_blocks=1 00:32:40.207 00:32:40.207 ' 00:32:40.207 21:02:43 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:40.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.207 --rc genhtml_branch_coverage=1 00:32:40.207 --rc genhtml_function_coverage=1 00:32:40.207 --rc genhtml_legend=1 00:32:40.207 --rc geninfo_all_blocks=1 00:32:40.207 --rc geninfo_unexecuted_blocks=1 00:32:40.207 00:32:40.207 ' 00:32:40.207 21:02:43 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:40.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.207 --rc genhtml_branch_coverage=1 00:32:40.207 --rc genhtml_function_coverage=1 00:32:40.207 --rc genhtml_legend=1 00:32:40.207 --rc geninfo_all_blocks=1 00:32:40.207 --rc geninfo_unexecuted_blocks=1 00:32:40.207 00:32:40.207 ' 00:32:40.207 21:02:43 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:40.207 21:02:43 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:40.207 21:02:43 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:32:40.207 21:02:43 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:40.207 21:02:43 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:40.207 21:02:43 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:40.207 21:02:43 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:40.207 21:02:43 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:32:40.208 21:02:43 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:40.208 21:02:43 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:40.208 21:02:43 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:40.208 21:02:43 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:40.208 21:02:43 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:40.208 21:02:43 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:40.208 21:02:43 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:40.208 21:02:43 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:40.208 21:02:43 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:40.208 21:02:43 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:40.208 21:02:43 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:40.208 21:02:43 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:40.208 21:02:43 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:32:40.208 21:02:43 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:40.208 21:02:43 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:40.208 21:02:43 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:40.208 21:02:43 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.208 21:02:43 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.208 21:02:43 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.208 21:02:43 keyring_linux -- paths/export.sh@5 -- # export PATH 00:32:40.208 21:02:43 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:32:40.208 21:02:43 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:32:40.208 21:02:43 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:40.208 21:02:43 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:40.208 21:02:43 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:40.208 21:02:43 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:40.208 21:02:43 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:40.208 21:02:43 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:40.208 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:40.208 21:02:43 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:40.208 21:02:43 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:40.208 21:02:43 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:40.208 21:02:43 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:40.208 21:02:43 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:40.208 21:02:43 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:40.208 21:02:43 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:32:40.208 21:02:43 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:32:40.208 21:02:43 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:32:40.208 21:02:43 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:32:40.208 21:02:43 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:40.208 21:02:43 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:32:40.208 21:02:43 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:40.208 21:02:43 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:40.208 21:02:43 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:32:40.208 21:02:43 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:40.208 21:02:43 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:40.208 21:02:43 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:32:40.208 21:02:43 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:40.208 21:02:43 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:32:40.208 21:02:43 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:32:40.208 21:02:43 keyring_linux -- nvmf/common.sh@733 -- # python - 00:32:40.208 21:02:43 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:32:40.208 21:02:43 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:32:40.208 /tmp/:spdk-test:key0 00:32:40.208 21:02:43 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:32:40.208 21:02:43 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:40.208 21:02:43 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:32:40.208 21:02:43 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:40.208 21:02:43 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:40.208 21:02:43 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:32:40.208 
21:02:43 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:40.208 21:02:43 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:40.208 21:02:43 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:32:40.208 21:02:43 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:40.208 21:02:43 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:32:40.208 21:02:43 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:32:40.208 21:02:43 keyring_linux -- nvmf/common.sh@733 -- # python - 00:32:40.466 21:02:43 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:32:40.466 21:02:43 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:32:40.466 /tmp/:spdk-test:key1 00:32:40.466 21:02:43 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1852298 00:32:40.466 21:02:43 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:40.466 21:02:43 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1852298 00:32:40.466 21:02:43 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1852298 ']' 00:32:40.466 21:02:43 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:40.466 21:02:43 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:40.466 21:02:43 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:40.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:40.466 21:02:43 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:40.467 21:02:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:40.467 [2024-11-26 21:02:43.983580] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:32:40.467 [2024-11-26 21:02:43.983686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1852298 ] 00:32:40.467 [2024-11-26 21:02:44.047910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:40.467 [2024-11-26 21:02:44.104464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:40.820 21:02:44 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:40.820 21:02:44 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:32:40.820 21:02:44 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:32:40.820 21:02:44 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.820 21:02:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:40.820 [2024-11-26 21:02:44.351537] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:40.820 null0 00:32:40.820 [2024-11-26 21:02:44.383612] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:40.820 [2024-11-26 21:02:44.384058] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:40.820 21:02:44 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.820 21:02:44 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:32:40.820 73511537 00:32:40.820 21:02:44 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:32:40.820 467090106 00:32:40.820 21:02:44 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1852304 00:32:40.820 21:02:44 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:32:40.820 21:02:44 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1852304 /var/tmp/bperf.sock 00:32:40.820 21:02:44 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1852304 ']' 00:32:40.820 21:02:44 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:40.820 21:02:44 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:40.820 21:02:44 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:40.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:40.820 21:02:44 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:40.820 21:02:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:41.089 [2024-11-26 21:02:44.450127] Starting SPDK v25.01-pre git sha1 752c08b51 / DPDK 24.03.0 initialization... 
00:32:41.089 [2024-11-26 21:02:44.450209] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1852304 ] 00:32:41.089 [2024-11-26 21:02:44.514736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:41.089 [2024-11-26 21:02:44.575388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:41.089 21:02:44 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:41.089 21:02:44 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:32:41.089 21:02:44 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:32:41.089 21:02:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:32:41.347 21:02:44 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:32:41.347 21:02:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:41.913 21:02:45 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:41.913 21:02:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:41.913 [2024-11-26 21:02:45.589271] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:42.171 nvme0n1 00:32:42.171 21:02:45 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:32:42.171 21:02:45 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:32:42.171 21:02:45 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:42.171 21:02:45 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:42.171 21:02:45 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:42.171 21:02:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:42.429 21:02:45 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:32:42.429 21:02:45 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:42.429 21:02:45 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:32:42.429 21:02:45 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:32:42.429 21:02:45 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:42.429 21:02:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:42.429 21:02:45 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:32:42.687 21:02:46 keyring_linux -- keyring/linux.sh@25 -- # sn=73511537 00:32:42.687 21:02:46 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:32:42.687 21:02:46 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:42.687 21:02:46 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 73511537 == \7\3\5\1\1\5\3\7 ]] 00:32:42.687 21:02:46 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 73511537 00:32:42.687 21:02:46 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:32:42.687 21:02:46 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:42.687 Running I/O for 1 seconds... 00:32:44.061 10920.00 IOPS, 42.66 MiB/s 00:32:44.061 Latency(us) 00:32:44.061 [2024-11-26T20:02:47.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:44.061 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:44.061 nvme0n1 : 1.01 10915.37 42.64 0.00 0.00 11650.21 10291.58 21845.33 00:32:44.061 [2024-11-26T20:02:47.758Z] =================================================================================================================== 00:32:44.061 [2024-11-26T20:02:47.758Z] Total : 10915.37 42.64 0.00 0.00 11650.21 10291.58 21845.33 00:32:44.061 { 00:32:44.061 "results": [ 00:32:44.061 { 00:32:44.061 "job": "nvme0n1", 00:32:44.061 "core_mask": "0x2", 00:32:44.061 "workload": "randread", 00:32:44.061 "status": "finished", 00:32:44.061 "queue_depth": 128, 00:32:44.061 "io_size": 4096, 00:32:44.061 "runtime": 1.012151, 00:32:44.061 "iops": 10915.367371074079, 00:32:44.061 "mibps": 42.63815379325812, 00:32:44.061 "io_failed": 0, 00:32:44.061 "io_timeout": 0, 00:32:44.061 "avg_latency_us": 11650.205113846649, 00:32:44.061 "min_latency_us": 10291.579259259259, 00:32:44.061 "max_latency_us": 21845.333333333332 00:32:44.061 } 00:32:44.061 ], 00:32:44.061 "core_count": 1 00:32:44.061 } 00:32:44.061 21:02:47 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:44.061 21:02:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:44.061 21:02:47 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:32:44.061 21:02:47 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:32:44.061 21:02:47 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:44.061 21:02:47 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:44.061 21:02:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:44.061 21:02:47 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:44.319 21:02:47 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:32:44.319 21:02:47 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:44.319 21:02:47 keyring_linux -- keyring/linux.sh@23 -- # return 00:32:44.319 21:02:47 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:44.319 21:02:47 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:32:44.319 21:02:47 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:32:44.319 21:02:47 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:32:44.319 21:02:47 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:44.319 21:02:47 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:32:44.319 21:02:47 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:44.319 21:02:47 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:44.319 21:02:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:44.577 [2024-11-26 21:02:48.187888] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:44.577 [2024-11-26 21:02:48.188113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd82e0 (107): Transport endpoint is not connected 00:32:44.577 [2024-11-26 21:02:48.189105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd82e0 (9): Bad file descriptor 00:32:44.577 [2024-11-26 21:02:48.190105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:32:44.577 [2024-11-26 21:02:48.190125] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:44.577 [2024-11-26 21:02:48.190153] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:32:44.577 [2024-11-26 21:02:48.190167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:32:44.577 request: 00:32:44.577 { 00:32:44.577 "name": "nvme0", 00:32:44.577 "trtype": "tcp", 00:32:44.577 "traddr": "127.0.0.1", 00:32:44.577 "adrfam": "ipv4", 00:32:44.577 "trsvcid": "4420", 00:32:44.577 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:44.577 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:44.577 "prchk_reftag": false, 00:32:44.577 "prchk_guard": false, 00:32:44.577 "hdgst": false, 00:32:44.577 "ddgst": false, 00:32:44.577 "psk": ":spdk-test:key1", 00:32:44.577 "allow_unrecognized_csi": false, 00:32:44.577 "method": "bdev_nvme_attach_controller", 00:32:44.577 "req_id": 1 00:32:44.577 } 00:32:44.577 Got JSON-RPC error response 00:32:44.577 response: 00:32:44.577 { 00:32:44.577 "code": -5, 00:32:44.577 "message": "Input/output error" 00:32:44.577 } 00:32:44.577 21:02:48 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:32:44.577 21:02:48 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:44.577 21:02:48 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:44.577 21:02:48 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:44.577 21:02:48 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:32:44.577 21:02:48 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:44.577 21:02:48 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:32:44.577 21:02:48 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:32:44.577 21:02:48 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:32:44.577 21:02:48 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:44.577 21:02:48 keyring_linux -- keyring/linux.sh@33 -- # sn=73511537 00:32:44.577 21:02:48 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 73511537 00:32:44.577 1 links removed 00:32:44.577 21:02:48 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:44.577 21:02:48 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:32:44.577 21:02:48 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:32:44.577 21:02:48 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:32:44.577 21:02:48 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:32:44.577 21:02:48 keyring_linux -- keyring/linux.sh@33 -- # sn=467090106 00:32:44.577 21:02:48 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 467090106 00:32:44.577 1 links removed 00:32:44.577 21:02:48 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1852304 00:32:44.577 21:02:48 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1852304 ']' 00:32:44.577 21:02:48 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1852304 00:32:44.577 21:02:48 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:32:44.577 21:02:48 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:44.577 21:02:48 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1852304 00:32:44.577 21:02:48 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:44.577 21:02:48 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:44.577 21:02:48 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1852304' 00:32:44.577 killing process with pid 1852304 00:32:44.577 21:02:48 keyring_linux -- common/autotest_common.sh@973 -- # kill 1852304 00:32:44.577 Received shutdown signal, test time was about 1.000000 seconds 00:32:44.577 00:32:44.577 
Latency(us) 00:32:44.577 [2024-11-26T20:02:48.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:44.577 [2024-11-26T20:02:48.274Z] =================================================================================================================== 00:32:44.577 [2024-11-26T20:02:48.274Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:44.577 21:02:48 keyring_linux -- common/autotest_common.sh@978 -- # wait 1852304 00:32:44.835 21:02:48 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1852298 00:32:44.835 21:02:48 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1852298 ']' 00:32:44.835 21:02:48 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1852298 00:32:44.835 21:02:48 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:32:44.835 21:02:48 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:44.835 21:02:48 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1852298 00:32:44.835 21:02:48 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:44.835 21:02:48 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:44.835 21:02:48 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1852298' 00:32:44.835 killing process with pid 1852298 00:32:44.835 21:02:48 keyring_linux -- common/autotest_common.sh@973 -- # kill 1852298 00:32:44.835 21:02:48 keyring_linux -- common/autotest_common.sh@978 -- # wait 1852298 00:32:45.403 00:32:45.403 real 0m5.205s 00:32:45.403 user 0m10.394s 00:32:45.403 sys 0m1.606s 00:32:45.403 21:02:48 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:45.403 21:02:48 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:45.403 ************************************ 00:32:45.403 END TEST keyring_linux 00:32:45.403 ************************************ 00:32:45.403 21:02:48 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:32:45.403 21:02:48 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:32:45.403 21:02:48 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:32:45.403 21:02:48 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:32:45.403 21:02:48 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:32:45.403 21:02:48 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:32:45.403 21:02:48 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:32:45.403 21:02:48 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:32:45.403 21:02:48 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:32:45.403 21:02:48 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:32:45.403 21:02:48 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:32:45.403 21:02:48 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:32:45.403 21:02:48 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:32:45.403 21:02:48 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:32:45.403 21:02:48 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:32:45.403 21:02:48 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:32:45.403 21:02:48 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:32:45.403 21:02:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:45.403 21:02:48 -- common/autotest_common.sh@10 -- # set +x 00:32:45.403 21:02:48 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:32:45.403 21:02:48 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:32:45.403 21:02:48 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:32:45.403 21:02:48 -- common/autotest_common.sh@10 -- # set +x 00:32:47.310 INFO: APP EXITING 
00:32:47.310 INFO: killing all VMs 00:32:47.310 INFO: killing vhost app 00:32:47.310 INFO: EXIT DONE 00:32:48.688 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:32:48.688 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:32:48.688 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:32:48.688 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:32:48.688 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:32:48.688 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:32:48.688 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:32:48.688 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:32:48.688 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:32:48.688 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:32:48.688 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:32:48.688 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:32:48.688 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:32:48.688 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:32:48.688 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:32:48.688 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:32:48.688 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:32:50.066 Cleaning 00:32:50.066 Removing: /var/run/dpdk/spdk0/config 00:32:50.066 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:50.066 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:50.066 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:50.066 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:50.066 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:50.066 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:50.066 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:50.066 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:50.066 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:50.066 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:50.066 Removing: /var/run/dpdk/spdk1/config 00:32:50.066 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:50.066 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:50.066 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:50.066 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:50.066 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:50.066 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:50.066 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:50.066 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:50.066 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:50.066 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:50.066 Removing: /var/run/dpdk/spdk2/config 00:32:50.066 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:50.066 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:50.066 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:50.066 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:50.066 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:50.066 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:50.066 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:50.066 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:50.066 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:50.066 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:50.066 Removing: /var/run/dpdk/spdk3/config 00:32:50.066 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:50.067 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:50.067 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:50.067 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:50.067 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:50.067 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:50.067 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:50.067 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:50.067 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:50.067 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:50.067 Removing: /var/run/dpdk/spdk4/config 00:32:50.067 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:50.067 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:50.067 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:50.067 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:50.067 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:50.067 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:50.067 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:50.067 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:50.067 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:50.067 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:50.067 Removing: /dev/shm/bdev_svc_trace.1 00:32:50.067 Removing: /dev/shm/nvmf_trace.0 00:32:50.067 Removing: /dev/shm/spdk_tgt_trace.pid1531252 00:32:50.067 Removing: /var/run/dpdk/spdk0 00:32:50.067 Removing: /var/run/dpdk/spdk1 00:32:50.067 Removing: /var/run/dpdk/spdk2 00:32:50.067 Removing: /var/run/dpdk/spdk3 00:32:50.067 Removing: /var/run/dpdk/spdk4 00:32:50.067 Removing: /var/run/dpdk/spdk_pid1529568 00:32:50.067 Removing: /var/run/dpdk/spdk_pid1530314 00:32:50.067 Removing: /var/run/dpdk/spdk_pid1531252 00:32:50.067 Removing: /var/run/dpdk/spdk_pid1531585 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1532278 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1532417 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1533127 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1533261 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1533521 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1534728 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1535698 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1535965 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1536163 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1536494 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1536692 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1536851 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1537010 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1537202 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1537510 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1540016 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1540181 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1540346 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1540353 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1540780 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1540783 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1541094 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1541217 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1541391 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1541523 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1541690 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1541697 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1542195 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1542348 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1542554 00:32:50.326 Removing: 
/var/run/dpdk/spdk_pid1544783 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1547308 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1554991 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1555587 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1558117 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1558299 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1560923 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1564765 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1566842 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1573277 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1578616 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1579822 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1580497 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1590979 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1593856 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1621205 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1624497 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1628337 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1633232 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1633351 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1633891 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1634548 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1635154 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1635603 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1635613 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1635871 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1635893 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1636005 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1636556 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1637210 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1637863 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1638270 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1638273 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1638432 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1639434 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1640163 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1645508 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1673525 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1676456 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1677633 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1678957 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1679101 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1679239 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1679382 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1679899 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1681843 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1682619 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1683054 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1684670 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1685090 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1685535 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1687925 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1691327 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1691328 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1691329 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1693551 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1698417 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1701085 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1704991 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1705940 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1707029 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1708000 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1710761 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1713346 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1715767 00:32:50.326 Removing: /var/run/dpdk/spdk_pid1720551 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1720561 00:32:50.585 Removing: 
/var/run/dpdk/spdk_pid1723373 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1723606 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1723741 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1724015 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1724024 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1726787 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1727188 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1729841 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1731768 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1735196 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1738523 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1745037 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1749485 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1749487 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1762362 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1762884 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1763299 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1763711 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1764287 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1764704 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1765221 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1765636 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1768141 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1768275 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1772082 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1772254 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1775621 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1778118 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1785026 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1785473 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1788047 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1788326 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1791341 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1795147 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1797196 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1803556 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1808762 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1810060 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1810706 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1820921 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1823073 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1825542 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1830734 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1830739 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1833639 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1835041 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1836555 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1837301 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1838701 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1839569 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1844905 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1845252 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1845651 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1847202 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1847495 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1847886 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1850334 00:32:50.585 Removing: /var/run/dpdk/spdk_pid1850345 00:32:50.586 Removing: /var/run/dpdk/spdk_pid1851915 00:32:50.586 Removing: /var/run/dpdk/spdk_pid1852298 00:32:50.586 Removing: /var/run/dpdk/spdk_pid1852304 00:32:50.586 Clean 00:32:50.586 21:02:54 -- common/autotest_common.sh@1453 -- # return 0 00:32:50.586 21:02:54 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:32:50.586 21:02:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:50.586 21:02:54 -- common/autotest_common.sh@10 -- # set +x 00:32:50.844 21:02:54 -- 
spdk/autotest.sh@391 -- # timing_exit autotest 00:32:50.844 21:02:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:50.844 21:02:54 -- common/autotest_common.sh@10 -- # set +x 00:32:50.844 21:02:54 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:32:50.844 21:02:54 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:32:50.844 21:02:54 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:32:50.844 21:02:54 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:32:50.844 21:02:54 -- spdk/autotest.sh@398 -- # hostname 00:32:50.844 21:02:54 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:32:50.844 geninfo: WARNING: invalid characters removed from testname! 00:33:22.925 21:03:25 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:26.222 21:03:29 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:28.764 21:03:32 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:32.062 21:03:35 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:35.418 21:03:38 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:37.956 21:03:41 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:41.253 21:03:44 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:41.253 21:03:44 -- spdk/autorun.sh@1 -- $ timing_finish 00:33:41.253 21:03:44 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:33:41.253 21:03:44 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:41.253 21:03:44 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:33:41.253 21:03:44 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:41.253 + [[ -n 1458947 ]] 00:33:41.253 + sudo kill 1458947 00:33:41.273 [Pipeline] } 00:33:41.289 [Pipeline] // stage 00:33:41.294 [Pipeline] } 00:33:41.308 [Pipeline] // timeout 00:33:41.314 [Pipeline] } 00:33:41.328 [Pipeline] // catchError 00:33:41.333 [Pipeline] } 00:33:41.349 [Pipeline] // wrap 00:33:41.355 [Pipeline] } 00:33:41.369 [Pipeline] // catchError 00:33:41.379 [Pipeline] stage 00:33:41.381 [Pipeline] { (Epilogue) 00:33:41.395 [Pipeline] catchError 00:33:41.397 [Pipeline] { 00:33:41.412 [Pipeline] echo 00:33:41.414 Cleanup processes 00:33:41.420 [Pipeline] sh 00:33:41.715 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:41.715 1863595 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:41.730 [Pipeline] sh 00:33:42.015 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:42.015 ++ awk '{print $1}' 00:33:42.015 ++ grep -v 'sudo pgrep' 00:33:42.015 + sudo kill -9 00:33:42.015 + true 00:33:42.028 [Pipeline] sh 00:33:42.314 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:52.325 [Pipeline] sh 00:33:52.612 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:52.612 Artifacts sizes are good 00:33:52.628 [Pipeline] archiveArtifacts 00:33:52.636 Archiving artifacts 00:33:52.775 [Pipeline] sh 00:33:53.061 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:33:53.076 [Pipeline] cleanWs 00:33:53.086 [WS-CLEANUP] Deleting project workspace... 00:33:53.086 [WS-CLEANUP] Deferred wipeout is used... 00:33:53.093 [WS-CLEANUP] done 00:33:53.095 [Pipeline] } 00:33:53.113 [Pipeline] // catchError 00:33:53.126 [Pipeline] sh 00:33:53.408 + logger -p user.info -t JENKINS-CI 00:33:53.418 [Pipeline] } 00:33:53.433 [Pipeline] // stage 00:33:53.438 [Pipeline] } 00:33:53.455 [Pipeline] // node 00:33:53.461 [Pipeline] End of Pipeline 00:33:53.499 Finished: SUCCESS
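For reference, the coverage post-processing that runs just before the epilogue boils down to merging the baseline and post-test captures and then pruning trees that are not part of SPDK proper. A condensed sketch of that lcov sequence follows; the long workspace paths are abbreviated with stand-in variables and the --rc branch/function-coverage switches from the log are omitted for brevity (both are simplifications, not what the script literally runs):

    # Condensed sketch of the lcov steps seen above ($ROOT/$OUT are stand-ins for the workspace paths)
    ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    OUT=$ROOT/../output
    # Merge the baseline capture and the post-test capture into a single tracefile
    lcov -q -a $OUT/cov_base.info -a $OUT/cov_test.info -o $OUT/cov_total.info
    # Strip coverage for code outside the SPDK tree (bundled DPDK, system headers, sample apps)
    lcov -q -r $OUT/cov_total.info '*/dpdk/*' -o $OUT/cov_total.info
    lcov -q -r $OUT/cov_total.info --ignore-errors unused,unused '/usr/*' -o $OUT/cov_total.info
    lcov -q -r $OUT/cov_total.info '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*' -o $OUT/cov_total.info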